00:00:00.001 Started by upstream project "autotest-nightly" build number 3912 00:00:00.001 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3911 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.004 Started by upstream project "autotest-nightly" build number 3909 00:00:00.004 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.005 Started by upstream project "autotest-nightly" build number 3908 00:00:00.005 originally caused by: 00:00:00.006 Started by user Latecki, Karol 00:00:00.128 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.130 using credential 00000000-0000-0000-0000-000000000002 00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.181 Fetching changes from the remote Git repository 00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.230 Using shallow fetch with depth 1 00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.230 > git --version # timeout=10 00:00:00.265 > git --version # 'git version 2.39.2' 00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:07.814 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.826 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.836 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:07.836 > git config core.sparsecheckout # timeout=10 00:00:07.848 > git read-tree -mu HEAD # timeout=10 00:00:07.865 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:07.899 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:07.899 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:08.013 [Pipeline] Start of Pipeline 00:00:08.029 [Pipeline] library 00:00:08.031 Loading library shm_lib@master 00:00:08.031 Library shm_lib@master is cached. Copying from home. 
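For reference, the checkout traced above is a plain shallow Gerrit fetch; a minimal shell sketch of the same sequence, using the URL, ref and commit from the log (the git init step, credential helper and proxy settings that Jenkins supplies are assumptions or omissions here):

# Shallow-fetch one Gerrit patchset of the jenkins_build_pool repo and check it out detached.
git init jbp && cd jbp        # assumption: starting from an empty directory
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6
git checkout -f FETCH_HEAD    # the log resolves this to e33ef006ccd688d2b66122cd0240b989d53c9017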
00:00:08.047 [Pipeline] node 00:00:08.056 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.057 [Pipeline] { 00:00:08.069 [Pipeline] catchError 00:00:08.071 [Pipeline] { 00:00:08.079 [Pipeline] wrap 00:00:08.086 [Pipeline] { 00:00:08.091 [Pipeline] stage 00:00:08.092 [Pipeline] { (Prologue) 00:00:08.294 [Pipeline] sh 00:00:08.577 + logger -p user.info -t JENKINS-CI 00:00:08.592 [Pipeline] echo 00:00:08.593 Node: CYP9 00:00:08.598 [Pipeline] sh 00:00:08.899 [Pipeline] setCustomBuildProperty 00:00:08.908 [Pipeline] echo 00:00:08.909 Cleanup processes 00:00:08.912 [Pipeline] sh 00:00:09.192 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.192 3254162 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.206 [Pipeline] sh 00:00:09.492 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.492 ++ grep -v 'sudo pgrep' 00:00:09.492 ++ awk '{print $1}' 00:00:09.492 + sudo kill -9 00:00:09.492 + true 00:00:09.504 [Pipeline] cleanWs 00:00:09.511 [WS-CLEANUP] Deleting project workspace... 00:00:09.512 [WS-CLEANUP] Deferred wipeout is used... 00:00:09.517 [WS-CLEANUP] done 00:00:09.520 [Pipeline] setCustomBuildProperty 00:00:09.532 [Pipeline] sh 00:00:09.811 + sudo git config --global --replace-all safe.directory '*' 00:00:09.927 [Pipeline] httpRequest 00:00:09.981 [Pipeline] echo 00:00:09.983 Sorcerer 10.211.164.101 is alive 00:00:09.991 [Pipeline] httpRequest 00:00:09.995 HttpMethod: GET 00:00:09.996 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:09.996 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:10.015 Response Code: HTTP/1.1 200 OK 00:00:10.016 Success: Status code 200 is in the accepted range: 200,404 00:00:10.016 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:16.017 [Pipeline] sh 00:00:16.304 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:16.322 [Pipeline] httpRequest 00:00:16.360 [Pipeline] echo 00:00:16.362 Sorcerer 10.211.164.101 is alive 00:00:16.371 [Pipeline] httpRequest 00:00:16.377 HttpMethod: GET 00:00:16.378 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:16.378 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:16.386 Response Code: HTTP/1.1 200 OK 00:00:16.386 Success: Status code 200 is in the accepted range: 200,404 00:00:16.387 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:47.368 [Pipeline] sh 00:01:47.655 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:50.211 [Pipeline] sh 00:01:50.499 + git -C spdk log --oneline -n5 00:01:50.499 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:50.499 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:50.499 3731556bd lvol: declare g_lvol_if static 00:01:50.499 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:50.499 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:50.511 [Pipeline] } 00:01:50.528 [Pipeline] // stage 00:01:50.536 [Pipeline] stage 00:01:50.538 [Pipeline] { (Prepare) 00:01:50.555 [Pipeline] writeFile 00:01:50.570 [Pipeline] sh 00:01:50.855 + logger -p user.info -t JENKINS-CI 
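The "Cleanup processes" step above is a pgrep/grep/awk/kill idiom; a minimal sketch of it, assuming the same workspace path (the trailing || true mirrors the "+ true" in the trace and keeps the step from failing when nothing is left to kill):

# Kill any SPDK processes left over in the workspace from a previous run.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true    # finding no leftover PIDs is the normal case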
00:01:50.868 [Pipeline] sh 00:01:51.155 + logger -p user.info -t JENKINS-CI 00:01:51.168 [Pipeline] sh 00:01:51.462 + cat autorun-spdk.conf 00:01:51.465 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.465 SPDK_TEST_NVMF=1 00:01:51.465 SPDK_TEST_NVME_CLI=1 00:01:51.465 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.465 SPDK_TEST_NVMF_NICS=e810 00:01:51.465 SPDK_RUN_ASAN=1 00:01:51.465 SPDK_RUN_UBSAN=1 00:01:51.465 NET_TYPE=phy 00:01:51.482 RUN_NIGHTLY=1 00:01:51.502 [Pipeline] readFile 00:01:51.518 [Pipeline] withEnv 00:01:51.519 [Pipeline] { 00:01:51.527 [Pipeline] sh 00:01:51.809 + set -ex 00:01:51.809 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:51.809 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:51.809 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.809 ++ SPDK_TEST_NVMF=1 00:01:51.809 ++ SPDK_TEST_NVME_CLI=1 00:01:51.809 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.809 ++ SPDK_TEST_NVMF_NICS=e810 00:01:51.809 ++ SPDK_RUN_ASAN=1 00:01:51.809 ++ SPDK_RUN_UBSAN=1 00:01:51.809 ++ NET_TYPE=phy 00:01:51.809 ++ RUN_NIGHTLY=1 00:01:51.809 + case $SPDK_TEST_NVMF_NICS in 00:01:51.809 + DRIVERS=ice 00:01:51.809 + [[ tcp == \r\d\m\a ]] 00:01:51.809 + [[ -n ice ]] 00:01:51.809 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:51.809 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:51.809 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:51.809 rmmod: ERROR: Module irdma is not currently loaded 00:01:51.809 rmmod: ERROR: Module i40iw is not currently loaded 00:01:51.809 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:51.809 + true 00:01:51.809 + for D in $DRIVERS 00:01:51.809 + sudo modprobe ice 00:01:51.809 + exit 0 00:01:51.820 [Pipeline] } 00:01:51.838 [Pipeline] // withEnv 00:01:51.844 [Pipeline] } 00:01:51.863 [Pipeline] // stage 00:01:51.873 [Pipeline] catchError 00:01:51.875 [Pipeline] { 00:01:51.889 [Pipeline] timeout 00:01:51.890 Timeout set to expire in 50 min 00:01:51.892 [Pipeline] { 00:01:51.906 [Pipeline] stage 00:01:51.908 [Pipeline] { (Tests) 00:01:51.924 [Pipeline] sh 00:01:52.213 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.213 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.213 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.213 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:52.213 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.213 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.213 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:52.213 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.213 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.213 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.213 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:52.213 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.213 + source /etc/os-release 00:01:52.213 ++ NAME='Fedora Linux' 00:01:52.213 ++ VERSION='38 (Cloud Edition)' 00:01:52.213 ++ ID=fedora 00:01:52.213 ++ VERSION_ID=38 00:01:52.213 ++ VERSION_CODENAME= 00:01:52.213 ++ PLATFORM_ID=platform:f38 00:01:52.213 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:52.213 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.213 ++ LOGO=fedora-logo-icon 00:01:52.213 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:52.213 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.213 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:52.213 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.213 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.213 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.213 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:52.213 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.213 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:52.213 ++ SUPPORT_END=2024-05-14 00:01:52.213 ++ VARIANT='Cloud Edition' 00:01:52.213 ++ VARIANT_ID=cloud 00:01:52.213 + uname -a 00:01:52.213 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:52.213 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:55.511 Hugepages 00:01:55.511 node hugesize free / total 00:01:55.511 node0 1048576kB 0 / 0 00:01:55.511 node0 2048kB 0 / 0 00:01:55.511 node1 1048576kB 0 / 0 00:01:55.511 node1 2048kB 0 / 0 00:01:55.511 00:01:55.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.511 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:55.511 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:55.511 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:55.511 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:55.511 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:55.511 + rm -f /tmp/spdk-ld-path 00:01:55.511 + source autorun-spdk.conf 00:01:55.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.511 ++ SPDK_TEST_NVMF=1 00:01:55.511 ++ SPDK_TEST_NVME_CLI=1 00:01:55.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.511 ++ SPDK_TEST_NVMF_NICS=e810 00:01:55.511 ++ SPDK_RUN_ASAN=1 00:01:55.511 ++ SPDK_RUN_UBSAN=1 00:01:55.511 ++ NET_TYPE=phy 00:01:55.511 ++ RUN_NIGHTLY=1 00:01:55.511 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.511 + [[ -n '' ]] 00:01:55.511 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.511 + for M in /var/spdk/build-*-manifest.txt 00:01:55.511 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:55.511 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:55.511 + for M in /var/spdk/build-*-manifest.txt 00:01:55.511 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.511 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:55.511 ++ uname 00:01:55.511 + [[ Linux == \L\i\n\u\x ]] 00:01:55.511 + sudo dmesg -T 00:01:55.511 + sudo dmesg --clear 00:01:55.511 + dmesg_pid=3255247 00:01:55.511 + [[ Fedora Linux == FreeBSD ]] 00:01:55.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.511 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.511 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.511 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:55.511 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:55.511 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.511 + sudo dmesg -Tw 00:01:55.511 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.511 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.511 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.511 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.511 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.511 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.511 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.511 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.511 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.511 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.511 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:55.511 Test configuration: 00:01:55.511 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.511 SPDK_TEST_NVMF=1 00:01:55.511 SPDK_TEST_NVME_CLI=1 00:01:55.511 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.511 SPDK_TEST_NVMF_NICS=e810 00:01:55.511 SPDK_RUN_ASAN=1 00:01:55.511 SPDK_RUN_UBSAN=1 00:01:55.511 NET_TYPE=phy 00:01:55.511 RUN_NIGHTLY=1 20:09:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:55.511 20:09:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.511 20:09:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.512 20:09:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.512 20:09:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.512 20:09:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.512 20:09:07 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.512 20:09:07 -- paths/export.sh@5 -- $ export PATH 00:01:55.512 20:09:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.512 20:09:07 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:55.512 20:09:07 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:55.512 20:09:07 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721671747.XXXXXX 00:01:55.512 20:09:07 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721671747.y8EmKj 00:01:55.512 20:09:07 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:55.512 20:09:07 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:55.512 20:09:07 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:55.512 20:09:07 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:55.512 20:09:07 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.512 20:09:07 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:55.512 20:09:07 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:55.512 20:09:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.512 20:09:07 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:55.512 20:09:07 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:55.512 20:09:07 -- pm/common@17 -- $ local monitor 00:01:55.512 20:09:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.512 20:09:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.512 20:09:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.512 20:09:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.512 20:09:07 -- pm/common@21 -- $ date +%s 00:01:55.512 20:09:07 -- pm/common@25 -- $ sleep 1 00:01:55.512 20:09:07 -- pm/common@21 -- $ date +%s 00:01:55.512 20:09:07 -- pm/common@21 -- $ date +%s 00:01:55.512 20:09:07 -- pm/common@21 -- $ date +%s 00:01:55.512 20:09:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721671747 00:01:55.512 20:09:07 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721671747 00:01:55.512 20:09:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721671747 00:01:55.512 20:09:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721671747 00:01:55.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721671747_collect-vmstat.pm.log 00:01:55.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721671747_collect-cpu-load.pm.log 00:01:55.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721671747_collect-cpu-temp.pm.log 00:01:55.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721671747_collect-bmc-pm.bmc.pm.log 00:01:56.452 20:09:08 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:56.452 20:09:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:56.452 20:09:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:56.452 20:09:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:56.452 20:09:08 -- spdk/autobuild.sh@16 -- $ date -u 00:01:56.452 Mon Jul 22 06:09:08 PM UTC 2024 00:01:56.452 20:09:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:56.712 v24.09-pre-297-gf7b31b2b9 00:01:56.712 20:09:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:56.712 20:09:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:56.712 20:09:08 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:56.712 20:09:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:56.712 20:09:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.712 ************************************ 00:01:56.712 START TEST asan 00:01:56.712 ************************************ 00:01:56.712 20:09:08 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:56.712 using asan 00:01:56.712 00:01:56.712 real 0m0.000s 00:01:56.712 user 0m0.000s 00:01:56.712 sys 0m0.000s 00:01:56.712 20:09:08 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:56.712 20:09:08 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:56.712 ************************************ 00:01:56.712 END TEST asan 00:01:56.712 ************************************ 00:01:56.712 20:09:08 -- common/autotest_common.sh@1142 -- $ return 0 00:01:56.712 20:09:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:56.712 20:09:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:56.712 20:09:08 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:56.712 20:09:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:56.712 20:09:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.712 ************************************ 00:01:56.712 START TEST ubsan 00:01:56.712 ************************************ 00:01:56.712 20:09:08 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:56.712 using ubsan 00:01:56.712 00:01:56.712 real 0m0.000s 00:01:56.712 user 0m0.000s 00:01:56.712 sys 
0m0.000s 00:01:56.712 20:09:08 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:56.712 20:09:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:56.712 ************************************ 00:01:56.712 END TEST ubsan 00:01:56.712 ************************************ 00:01:56.712 20:09:08 -- common/autotest_common.sh@1142 -- $ return 0 00:01:56.712 20:09:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:56.713 20:09:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:56.713 20:09:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:56.713 20:09:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:56.973 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:56.973 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.233 Using 'verbs' RDMA provider 00:02:13.140 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:25.377 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:25.377 Creating mk/config.mk...done. 00:02:25.377 Creating mk/cc.flags.mk...done. 00:02:25.377 Type 'make' to build. 00:02:25.377 20:09:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:25.377 20:09:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:25.377 20:09:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.377 20:09:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.377 ************************************ 00:02:25.377 START TEST make 00:02:25.377 ************************************ 00:02:25.377 20:09:37 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:25.638 make[1]: Nothing to be done for 'all'. 
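Before the DPDK submodule output that follows, the job has already configured SPDK and started the build; a sketch of the equivalent manual invocation with the flags shown in the log (the fio path assumes a fio source tree at /usr/src/fio, and -j144 matches this host's core count and should be adjusted locally):

# Debug ASAN/UBSAN build of SPDK with shared libraries, as configured by this job.
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
make -j144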
00:02:33.772 The Meson build system 00:02:33.772 Version: 1.3.1 00:02:33.772 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:33.772 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:33.772 Build type: native build 00:02:33.772 Program cat found: YES (/usr/bin/cat) 00:02:33.772 Project name: DPDK 00:02:33.772 Project version: 24.03.0 00:02:33.772 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:33.772 C linker for the host machine: cc ld.bfd 2.39-16 00:02:33.772 Host machine cpu family: x86_64 00:02:33.772 Host machine cpu: x86_64 00:02:33.772 Message: ## Building in Developer Mode ## 00:02:33.772 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:33.772 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:33.772 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:33.772 Program python3 found: YES (/usr/bin/python3) 00:02:33.772 Program cat found: YES (/usr/bin/cat) 00:02:33.772 Compiler for C supports arguments -march=native: YES 00:02:33.772 Checking for size of "void *" : 8 00:02:33.772 Checking for size of "void *" : 8 (cached) 00:02:33.772 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:33.772 Library m found: YES 00:02:33.772 Library numa found: YES 00:02:33.772 Has header "numaif.h" : YES 00:02:33.772 Library fdt found: NO 00:02:33.772 Library execinfo found: NO 00:02:33.772 Has header "execinfo.h" : YES 00:02:33.772 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:33.772 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:33.772 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:33.773 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:33.773 Run-time dependency openssl found: YES 3.0.9 00:02:33.773 Run-time dependency libpcap found: YES 1.10.4 00:02:33.773 Has header "pcap.h" with dependency libpcap: YES 00:02:33.773 Compiler for C supports arguments -Wcast-qual: YES 00:02:33.773 Compiler for C supports arguments -Wdeprecated: YES 00:02:33.773 Compiler for C supports arguments -Wformat: YES 00:02:33.773 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:33.773 Compiler for C supports arguments -Wformat-security: NO 00:02:33.773 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.773 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:33.773 Compiler for C supports arguments -Wnested-externs: YES 00:02:33.773 Compiler for C supports arguments -Wold-style-definition: YES 00:02:33.773 Compiler for C supports arguments -Wpointer-arith: YES 00:02:33.773 Compiler for C supports arguments -Wsign-compare: YES 00:02:33.773 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:33.773 Compiler for C supports arguments -Wundef: YES 00:02:33.773 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.773 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:33.773 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:33.773 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.773 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:33.773 Program objdump found: YES (/usr/bin/objdump) 00:02:33.773 Compiler for C supports arguments -mavx512f: YES 00:02:33.773 Checking if "AVX512 checking" compiles: YES 
00:02:33.773 Fetching value of define "__SSE4_2__" : 1 00:02:33.773 Fetching value of define "__AES__" : 1 00:02:33.773 Fetching value of define "__AVX__" : 1 00:02:33.773 Fetching value of define "__AVX2__" : 1 00:02:33.773 Fetching value of define "__AVX512BW__" : 1 00:02:33.773 Fetching value of define "__AVX512CD__" : 1 00:02:33.773 Fetching value of define "__AVX512DQ__" : 1 00:02:33.773 Fetching value of define "__AVX512F__" : 1 00:02:33.773 Fetching value of define "__AVX512VL__" : 1 00:02:33.773 Fetching value of define "__PCLMUL__" : 1 00:02:33.773 Fetching value of define "__RDRND__" : 1 00:02:33.773 Fetching value of define "__RDSEED__" : 1 00:02:33.773 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:33.773 Fetching value of define "__znver1__" : (undefined) 00:02:33.773 Fetching value of define "__znver2__" : (undefined) 00:02:33.773 Fetching value of define "__znver3__" : (undefined) 00:02:33.773 Fetching value of define "__znver4__" : (undefined) 00:02:33.773 Library asan found: YES 00:02:33.773 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:33.773 Message: lib/log: Defining dependency "log" 00:02:33.773 Message: lib/kvargs: Defining dependency "kvargs" 00:02:33.773 Message: lib/telemetry: Defining dependency "telemetry" 00:02:33.773 Library rt found: YES 00:02:33.773 Checking for function "getentropy" : NO 00:02:33.773 Message: lib/eal: Defining dependency "eal" 00:02:33.773 Message: lib/ring: Defining dependency "ring" 00:02:33.773 Message: lib/rcu: Defining dependency "rcu" 00:02:33.773 Message: lib/mempool: Defining dependency "mempool" 00:02:33.773 Message: lib/mbuf: Defining dependency "mbuf" 00:02:33.773 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:33.773 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:33.773 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:33.773 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:33.773 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:33.773 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:33.773 Compiler for C supports arguments -mpclmul: YES 00:02:33.773 Compiler for C supports arguments -maes: YES 00:02:33.773 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.773 Compiler for C supports arguments -mavx512bw: YES 00:02:33.773 Compiler for C supports arguments -mavx512dq: YES 00:02:33.773 Compiler for C supports arguments -mavx512vl: YES 00:02:33.773 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:33.773 Compiler for C supports arguments -mavx2: YES 00:02:33.773 Compiler for C supports arguments -mavx: YES 00:02:33.773 Message: lib/net: Defining dependency "net" 00:02:33.773 Message: lib/meter: Defining dependency "meter" 00:02:33.773 Message: lib/ethdev: Defining dependency "ethdev" 00:02:33.773 Message: lib/pci: Defining dependency "pci" 00:02:33.773 Message: lib/cmdline: Defining dependency "cmdline" 00:02:33.773 Message: lib/hash: Defining dependency "hash" 00:02:33.773 Message: lib/timer: Defining dependency "timer" 00:02:33.773 Message: lib/compressdev: Defining dependency "compressdev" 00:02:33.773 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:33.773 Message: lib/dmadev: Defining dependency "dmadev" 00:02:33.773 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:33.773 Message: lib/power: Defining dependency "power" 00:02:33.773 Message: lib/reorder: Defining dependency "reorder" 00:02:33.773 Message: lib/security: Defining dependency "security" 00:02:33.773 Has header "linux/userfaultfd.h" 
: YES 00:02:33.773 Has header "linux/vduse.h" : YES 00:02:33.773 Message: lib/vhost: Defining dependency "vhost" 00:02:33.773 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:33.773 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:33.773 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:33.773 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:33.773 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:33.773 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:33.773 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:33.773 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:33.773 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:33.773 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:33.773 Program doxygen found: YES (/usr/bin/doxygen) 00:02:33.773 Configuring doxy-api-html.conf using configuration 00:02:33.773 Configuring doxy-api-man.conf using configuration 00:02:33.773 Program mandb found: YES (/usr/bin/mandb) 00:02:33.773 Program sphinx-build found: NO 00:02:33.773 Configuring rte_build_config.h using configuration 00:02:33.773 Message: 00:02:33.773 ================= 00:02:33.773 Applications Enabled 00:02:33.773 ================= 00:02:33.773 00:02:33.773 apps: 00:02:33.773 00:02:33.773 00:02:33.773 Message: 00:02:33.773 ================= 00:02:33.773 Libraries Enabled 00:02:33.773 ================= 00:02:33.773 00:02:33.773 libs: 00:02:33.773 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:33.773 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:33.773 cryptodev, dmadev, power, reorder, security, vhost, 00:02:33.773 00:02:33.773 Message: 00:02:33.773 =============== 00:02:33.773 Drivers Enabled 00:02:33.773 =============== 00:02:33.773 00:02:33.773 common: 00:02:33.773 00:02:33.773 bus: 00:02:33.773 pci, vdev, 00:02:33.773 mempool: 00:02:33.773 ring, 00:02:33.773 dma: 00:02:33.773 00:02:33.773 net: 00:02:33.773 00:02:33.773 crypto: 00:02:33.773 00:02:33.773 compress: 00:02:33.773 00:02:33.773 vdpa: 00:02:33.773 00:02:33.773 00:02:33.773 Message: 00:02:33.773 ================= 00:02:33.773 Content Skipped 00:02:33.773 ================= 00:02:33.773 00:02:33.773 apps: 00:02:33.773 dumpcap: explicitly disabled via build config 00:02:33.773 graph: explicitly disabled via build config 00:02:33.774 pdump: explicitly disabled via build config 00:02:33.774 proc-info: explicitly disabled via build config 00:02:33.774 test-acl: explicitly disabled via build config 00:02:33.774 test-bbdev: explicitly disabled via build config 00:02:33.774 test-cmdline: explicitly disabled via build config 00:02:33.774 test-compress-perf: explicitly disabled via build config 00:02:33.774 test-crypto-perf: explicitly disabled via build config 00:02:33.774 test-dma-perf: explicitly disabled via build config 00:02:33.774 test-eventdev: explicitly disabled via build config 00:02:33.774 test-fib: explicitly disabled via build config 00:02:33.774 test-flow-perf: explicitly disabled via build config 00:02:33.774 test-gpudev: explicitly disabled via build config 00:02:33.774 test-mldev: explicitly disabled via build config 00:02:33.774 test-pipeline: explicitly disabled via build config 00:02:33.774 test-pmd: explicitly disabled via build config 00:02:33.774 test-regex: explicitly disabled via build config 00:02:33.774 test-sad: explicitly disabled via 
build config 00:02:33.774 test-security-perf: explicitly disabled via build config 00:02:33.774 00:02:33.774 libs: 00:02:33.774 argparse: explicitly disabled via build config 00:02:33.774 metrics: explicitly disabled via build config 00:02:33.774 acl: explicitly disabled via build config 00:02:33.774 bbdev: explicitly disabled via build config 00:02:33.774 bitratestats: explicitly disabled via build config 00:02:33.774 bpf: explicitly disabled via build config 00:02:33.774 cfgfile: explicitly disabled via build config 00:02:33.774 distributor: explicitly disabled via build config 00:02:33.774 efd: explicitly disabled via build config 00:02:33.774 eventdev: explicitly disabled via build config 00:02:33.774 dispatcher: explicitly disabled via build config 00:02:33.774 gpudev: explicitly disabled via build config 00:02:33.774 gro: explicitly disabled via build config 00:02:33.774 gso: explicitly disabled via build config 00:02:33.774 ip_frag: explicitly disabled via build config 00:02:33.774 jobstats: explicitly disabled via build config 00:02:33.774 latencystats: explicitly disabled via build config 00:02:33.774 lpm: explicitly disabled via build config 00:02:33.774 member: explicitly disabled via build config 00:02:33.774 pcapng: explicitly disabled via build config 00:02:33.774 rawdev: explicitly disabled via build config 00:02:33.774 regexdev: explicitly disabled via build config 00:02:33.774 mldev: explicitly disabled via build config 00:02:33.774 rib: explicitly disabled via build config 00:02:33.774 sched: explicitly disabled via build config 00:02:33.774 stack: explicitly disabled via build config 00:02:33.774 ipsec: explicitly disabled via build config 00:02:33.774 pdcp: explicitly disabled via build config 00:02:33.774 fib: explicitly disabled via build config 00:02:33.774 port: explicitly disabled via build config 00:02:33.774 pdump: explicitly disabled via build config 00:02:33.774 table: explicitly disabled via build config 00:02:33.774 pipeline: explicitly disabled via build config 00:02:33.774 graph: explicitly disabled via build config 00:02:33.774 node: explicitly disabled via build config 00:02:33.774 00:02:33.774 drivers: 00:02:33.774 common/cpt: not in enabled drivers build config 00:02:33.774 common/dpaax: not in enabled drivers build config 00:02:33.774 common/iavf: not in enabled drivers build config 00:02:33.774 common/idpf: not in enabled drivers build config 00:02:33.774 common/ionic: not in enabled drivers build config 00:02:33.774 common/mvep: not in enabled drivers build config 00:02:33.774 common/octeontx: not in enabled drivers build config 00:02:33.774 bus/auxiliary: not in enabled drivers build config 00:02:33.774 bus/cdx: not in enabled drivers build config 00:02:33.774 bus/dpaa: not in enabled drivers build config 00:02:33.774 bus/fslmc: not in enabled drivers build config 00:02:33.774 bus/ifpga: not in enabled drivers build config 00:02:33.774 bus/platform: not in enabled drivers build config 00:02:33.774 bus/uacce: not in enabled drivers build config 00:02:33.774 bus/vmbus: not in enabled drivers build config 00:02:33.774 common/cnxk: not in enabled drivers build config 00:02:33.774 common/mlx5: not in enabled drivers build config 00:02:33.774 common/nfp: not in enabled drivers build config 00:02:33.774 common/nitrox: not in enabled drivers build config 00:02:33.774 common/qat: not in enabled drivers build config 00:02:33.774 common/sfc_efx: not in enabled drivers build config 00:02:33.774 mempool/bucket: not in enabled drivers build config 00:02:33.774 
mempool/cnxk: not in enabled drivers build config 00:02:33.774 mempool/dpaa: not in enabled drivers build config 00:02:33.774 mempool/dpaa2: not in enabled drivers build config 00:02:33.774 mempool/octeontx: not in enabled drivers build config 00:02:33.774 mempool/stack: not in enabled drivers build config 00:02:33.774 dma/cnxk: not in enabled drivers build config 00:02:33.774 dma/dpaa: not in enabled drivers build config 00:02:33.774 dma/dpaa2: not in enabled drivers build config 00:02:33.774 dma/hisilicon: not in enabled drivers build config 00:02:33.774 dma/idxd: not in enabled drivers build config 00:02:33.774 dma/ioat: not in enabled drivers build config 00:02:33.774 dma/skeleton: not in enabled drivers build config 00:02:33.774 net/af_packet: not in enabled drivers build config 00:02:33.774 net/af_xdp: not in enabled drivers build config 00:02:33.774 net/ark: not in enabled drivers build config 00:02:33.774 net/atlantic: not in enabled drivers build config 00:02:33.774 net/avp: not in enabled drivers build config 00:02:33.774 net/axgbe: not in enabled drivers build config 00:02:33.774 net/bnx2x: not in enabled drivers build config 00:02:33.774 net/bnxt: not in enabled drivers build config 00:02:33.774 net/bonding: not in enabled drivers build config 00:02:33.774 net/cnxk: not in enabled drivers build config 00:02:33.774 net/cpfl: not in enabled drivers build config 00:02:33.774 net/cxgbe: not in enabled drivers build config 00:02:33.774 net/dpaa: not in enabled drivers build config 00:02:33.774 net/dpaa2: not in enabled drivers build config 00:02:33.774 net/e1000: not in enabled drivers build config 00:02:33.774 net/ena: not in enabled drivers build config 00:02:33.774 net/enetc: not in enabled drivers build config 00:02:33.774 net/enetfec: not in enabled drivers build config 00:02:33.774 net/enic: not in enabled drivers build config 00:02:33.774 net/failsafe: not in enabled drivers build config 00:02:33.774 net/fm10k: not in enabled drivers build config 00:02:33.774 net/gve: not in enabled drivers build config 00:02:33.774 net/hinic: not in enabled drivers build config 00:02:33.774 net/hns3: not in enabled drivers build config 00:02:33.774 net/i40e: not in enabled drivers build config 00:02:33.774 net/iavf: not in enabled drivers build config 00:02:33.774 net/ice: not in enabled drivers build config 00:02:33.774 net/idpf: not in enabled drivers build config 00:02:33.774 net/igc: not in enabled drivers build config 00:02:33.774 net/ionic: not in enabled drivers build config 00:02:33.774 net/ipn3ke: not in enabled drivers build config 00:02:33.774 net/ixgbe: not in enabled drivers build config 00:02:33.774 net/mana: not in enabled drivers build config 00:02:33.774 net/memif: not in enabled drivers build config 00:02:33.774 net/mlx4: not in enabled drivers build config 00:02:33.774 net/mlx5: not in enabled drivers build config 00:02:33.774 net/mvneta: not in enabled drivers build config 00:02:33.774 net/mvpp2: not in enabled drivers build config 00:02:33.774 net/netvsc: not in enabled drivers build config 00:02:33.774 net/nfb: not in enabled drivers build config 00:02:33.774 net/nfp: not in enabled drivers build config 00:02:33.774 net/ngbe: not in enabled drivers build config 00:02:33.774 net/null: not in enabled drivers build config 00:02:33.774 net/octeontx: not in enabled drivers build config 00:02:33.774 net/octeon_ep: not in enabled drivers build config 00:02:33.774 net/pcap: not in enabled drivers build config 00:02:33.774 net/pfe: not in enabled drivers build config 
00:02:33.775 net/qede: not in enabled drivers build config 00:02:33.775 net/ring: not in enabled drivers build config 00:02:33.775 net/sfc: not in enabled drivers build config 00:02:33.775 net/softnic: not in enabled drivers build config 00:02:33.775 net/tap: not in enabled drivers build config 00:02:33.775 net/thunderx: not in enabled drivers build config 00:02:33.775 net/txgbe: not in enabled drivers build config 00:02:33.775 net/vdev_netvsc: not in enabled drivers build config 00:02:33.775 net/vhost: not in enabled drivers build config 00:02:33.775 net/virtio: not in enabled drivers build config 00:02:33.775 net/vmxnet3: not in enabled drivers build config 00:02:33.775 raw/*: missing internal dependency, "rawdev" 00:02:33.775 crypto/armv8: not in enabled drivers build config 00:02:33.775 crypto/bcmfs: not in enabled drivers build config 00:02:33.775 crypto/caam_jr: not in enabled drivers build config 00:02:33.775 crypto/ccp: not in enabled drivers build config 00:02:33.775 crypto/cnxk: not in enabled drivers build config 00:02:33.775 crypto/dpaa_sec: not in enabled drivers build config 00:02:33.775 crypto/dpaa2_sec: not in enabled drivers build config 00:02:33.775 crypto/ipsec_mb: not in enabled drivers build config 00:02:33.775 crypto/mlx5: not in enabled drivers build config 00:02:33.775 crypto/mvsam: not in enabled drivers build config 00:02:33.775 crypto/nitrox: not in enabled drivers build config 00:02:33.775 crypto/null: not in enabled drivers build config 00:02:33.775 crypto/octeontx: not in enabled drivers build config 00:02:33.775 crypto/openssl: not in enabled drivers build config 00:02:33.775 crypto/scheduler: not in enabled drivers build config 00:02:33.775 crypto/uadk: not in enabled drivers build config 00:02:33.775 crypto/virtio: not in enabled drivers build config 00:02:33.775 compress/isal: not in enabled drivers build config 00:02:33.775 compress/mlx5: not in enabled drivers build config 00:02:33.775 compress/nitrox: not in enabled drivers build config 00:02:33.775 compress/octeontx: not in enabled drivers build config 00:02:33.775 compress/zlib: not in enabled drivers build config 00:02:33.775 regex/*: missing internal dependency, "regexdev" 00:02:33.775 ml/*: missing internal dependency, "mldev" 00:02:33.775 vdpa/ifc: not in enabled drivers build config 00:02:33.775 vdpa/mlx5: not in enabled drivers build config 00:02:33.775 vdpa/nfp: not in enabled drivers build config 00:02:33.775 vdpa/sfc: not in enabled drivers build config 00:02:33.775 event/*: missing internal dependency, "eventdev" 00:02:33.775 baseband/*: missing internal dependency, "bbdev" 00:02:33.775 gpu/*: missing internal dependency, "gpudev" 00:02:33.775 00:02:33.775 00:02:34.036 Build targets in project: 84 00:02:34.036 00:02:34.036 DPDK 24.03.0 00:02:34.036 00:02:34.036 User defined options 00:02:34.036 buildtype : debug 00:02:34.036 default_library : shared 00:02:34.036 libdir : lib 00:02:34.036 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:34.036 b_sanitize : address 00:02:34.036 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.036 c_link_args : 00:02:34.036 cpu_instruction_set: native 00:02:34.036 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:34.036 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:34.036 enable_docs : false 00:02:34.036 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.036 enable_kmods : false 00:02:34.036 max_lcores : 128 00:02:34.036 tests : false 00:02:34.036 00:02:34.036 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.620 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:34.620 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.620 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.620 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.620 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.620 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.620 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.620 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.620 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.620 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.620 [10/267] Linking static target lib/librte_kvargs.a 00:02:34.620 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.620 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:34.620 [13/267] Linking static target lib/librte_log.a 00:02:34.620 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.620 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.620 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.620 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.620 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.884 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.884 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.884 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.884 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.884 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.884 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.884 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.884 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.884 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.884 [28/267] Linking static target lib/librte_pci.a 00:02:34.884 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.884 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.884 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.884 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.884 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.884 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
00:02:34.884 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.884 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.884 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.884 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.143 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.143 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.143 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.143 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.143 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.143 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.143 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.143 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.143 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:35.143 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.143 [49/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.143 [50/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:35.143 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.143 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.143 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.143 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.143 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.143 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.143 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.143 [58/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.143 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.143 [60/267] Linking static target lib/librte_telemetry.a 00:02:35.143 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.143 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.143 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.143 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.143 [65/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.143 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.143 [67/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.143 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.143 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.143 [70/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:35.143 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.143 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:35.143 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:35.143 [74/267] Linking static target 
lib/librte_timer.a 00:02:35.143 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.143 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.143 [77/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.143 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.143 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.143 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.143 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.143 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.143 [83/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.143 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.143 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.143 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.143 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.143 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.143 [89/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.143 [90/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:35.143 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.143 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:35.143 [93/267] Linking static target lib/librte_meter.a 00:02:35.143 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.404 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.404 [96/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.404 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.404 [98/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.404 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.404 [100/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.404 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.404 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.404 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.404 [104/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.404 [105/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.404 [106/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.404 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.404 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.404 [109/267] Linking static target lib/librte_cmdline.a 00:02:35.404 [110/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.404 [111/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.404 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.405 [113/267] Linking static target lib/librte_ring.a 00:02:35.405 [114/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.405 
[115/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.405 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.405 [117/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.405 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.405 [119/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.405 [120/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.405 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:35.405 [122/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.405 [123/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.405 [124/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.405 [125/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.405 [126/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.405 [127/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.405 [128/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:35.405 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.405 [130/267] Linking target lib/librte_log.so.24.1 00:02:35.405 [131/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.405 [132/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.405 [133/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.405 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.405 [135/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.405 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.405 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.405 [138/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.405 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.405 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.405 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.405 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.405 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.405 [144/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.405 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.405 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.405 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.405 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.405 [149/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.405 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.405 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.405 [152/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.405 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.405 [154/267] Linking static target lib/librte_power.a 00:02:35.405 [155/267] Linking static 
target lib/librte_dmadev.a 00:02:35.405 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.405 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.405 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.405 [159/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.405 [160/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.405 [161/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.405 [162/267] Linking static target lib/librte_compressdev.a 00:02:35.405 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.405 [164/267] Linking static target lib/librte_mempool.a 00:02:35.405 [165/267] Linking static target lib/librte_rcu.a 00:02:35.405 [166/267] Linking static target lib/librte_security.a 00:02:35.405 [167/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.405 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.405 [169/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.405 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.405 [171/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.405 [172/267] Linking static target lib/librte_reorder.a 00:02:35.405 [173/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.405 [174/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.665 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.665 [176/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.665 [177/267] Linking target lib/librte_kvargs.so.24.1 00:02:35.665 [178/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:35.665 [179/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.665 [180/267] Linking static target lib/librte_net.a 00:02:35.665 [181/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.665 [182/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.665 [183/267] Linking static target drivers/librte_bus_vdev.a 00:02:35.665 [184/267] Linking static target lib/librte_eal.a 00:02:35.665 [185/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.665 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.665 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.665 [188/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.665 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.665 [190/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.665 [191/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.665 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.665 [193/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.665 [194/267] Linking target lib/librte_telemetry.so.24.1 00:02:35.665 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.665 [196/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.927 [197/267] Linking static target drivers/librte_bus_pci.a 00:02:35.927 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.927 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.927 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.927 [201/267] Linking static target drivers/librte_mempool_ring.a 00:02:35.927 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.927 [203/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.927 [204/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:35.927 [205/267] Linking static target lib/librte_mbuf.a 00:02:35.927 [206/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.927 [207/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.927 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.927 [209/267] Linking static target lib/librte_hash.a 00:02:35.927 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.927 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.188 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.188 [213/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.188 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.188 [215/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.188 [216/267] Linking static target lib/librte_cryptodev.a 00:02:36.188 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.448 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.448 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.709 [220/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.709 [221/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.709 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.969 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.230 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:37.230 [225/267] Linking static target lib/librte_ethdev.a 00:02:37.230 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.616 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.528 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.528 [229/267] Linking static target lib/librte_vhost.a 00:02:42.445 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.653 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.225 [232/267] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:47.225 [233/267] Linking target lib/librte_eal.so.24.1 00:02:47.225 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.225 [235/267] Linking target lib/librte_ring.so.24.1 00:02:47.225 [236/267] Linking target lib/librte_pci.so.24.1 00:02:47.225 [237/267] Linking target lib/librte_meter.so.24.1 00:02:47.225 [238/267] Linking target lib/librte_timer.so.24.1 00:02:47.225 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.225 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:47.485 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.485 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.485 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.485 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.485 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.485 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:47.485 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:47.485 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.746 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.746 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.746 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:47.746 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:47.746 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:47.746 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:48.007 [255/267] Linking target lib/librte_net.so.24.1 00:02:48.007 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:48.007 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:48.007 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.007 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.007 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:48.007 [261/267] Linking target lib/librte_security.so.24.1 00:02:48.007 [262/267] Linking target lib/librte_hash.so.24.1 00:02:48.007 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:48.268 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.268 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.268 [266/267] Linking target lib/librte_power.so.24.1 00:02:48.268 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:48.268 INFO: autodetecting backend as ninja 00:02:48.268 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:49.653 CC lib/ut_mock/mock.o 00:02:49.653 CC lib/log/log.o 00:02:49.653 CC lib/log/log_flags.o 00:02:49.653 CC lib/log/log_deprecated.o 00:02:49.653 CC lib/ut/ut.o 00:02:49.653 LIB libspdk_ut_mock.a 00:02:49.653 LIB libspdk_log.a 00:02:49.653 LIB libspdk_ut.a 00:02:49.653 SO libspdk_ut_mock.so.6.0 00:02:49.653 SO libspdk_log.so.7.0 00:02:49.653 SO libspdk_ut.so.2.0 00:02:49.653 SYMLINK libspdk_ut_mock.so 00:02:49.653 SYMLINK libspdk_ut.so 00:02:49.914 SYMLINK libspdk_log.so 00:02:50.173 CC lib/util/base64.o 00:02:50.173 CC lib/dma/dma.o 00:02:50.173 CC 
lib/util/cpuset.o 00:02:50.173 CC lib/util/bit_array.o 00:02:50.173 CC lib/util/crc16.o 00:02:50.173 CC lib/util/crc32_ieee.o 00:02:50.173 CC lib/util/crc32.o 00:02:50.173 CC lib/util/crc32c.o 00:02:50.173 CC lib/util/crc64.o 00:02:50.173 CC lib/util/fd.o 00:02:50.173 CC lib/util/dif.o 00:02:50.173 CC lib/util/hexlify.o 00:02:50.173 CC lib/util/fd_group.o 00:02:50.173 CXX lib/trace_parser/trace.o 00:02:50.173 CC lib/util/file.o 00:02:50.173 CC lib/ioat/ioat.o 00:02:50.173 CC lib/util/iov.o 00:02:50.173 CC lib/util/math.o 00:02:50.173 CC lib/util/net.o 00:02:50.173 CC lib/util/pipe.o 00:02:50.173 CC lib/util/strerror_tls.o 00:02:50.173 CC lib/util/string.o 00:02:50.173 CC lib/util/uuid.o 00:02:50.173 CC lib/util/xor.o 00:02:50.173 CC lib/util/zipf.o 00:02:50.433 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.433 CC lib/vfio_user/host/vfio_user.o 00:02:50.433 LIB libspdk_dma.a 00:02:50.433 SO libspdk_dma.so.4.0 00:02:50.433 LIB libspdk_ioat.a 00:02:50.433 SYMLINK libspdk_dma.so 00:02:50.433 SO libspdk_ioat.so.7.0 00:02:50.692 SYMLINK libspdk_ioat.so 00:02:50.692 LIB libspdk_vfio_user.a 00:02:50.692 SO libspdk_vfio_user.so.5.0 00:02:50.692 SYMLINK libspdk_vfio_user.so 00:02:50.692 LIB libspdk_util.a 00:02:50.984 SO libspdk_util.so.10.0 00:02:50.984 SYMLINK libspdk_util.so 00:02:51.277 LIB libspdk_trace_parser.a 00:02:51.277 SO libspdk_trace_parser.so.5.0 00:02:51.277 SYMLINK libspdk_trace_parser.so 00:02:51.277 CC lib/vmd/vmd.o 00:02:51.277 CC lib/vmd/led.o 00:02:51.277 CC lib/env_dpdk/env.o 00:02:51.277 CC lib/env_dpdk/memory.o 00:02:51.277 CC lib/env_dpdk/pci.o 00:02:51.277 CC lib/env_dpdk/init.o 00:02:51.277 CC lib/env_dpdk/threads.o 00:02:51.277 CC lib/json/json_parse.o 00:02:51.277 CC lib/conf/conf.o 00:02:51.277 CC lib/env_dpdk/pci_ioat.o 00:02:51.277 CC lib/json/json_util.o 00:02:51.277 CC lib/env_dpdk/pci_virtio.o 00:02:51.277 CC lib/env_dpdk/pci_vmd.o 00:02:51.277 CC lib/env_dpdk/pci_idxd.o 00:02:51.277 CC lib/rdma_provider/common.o 00:02:51.277 CC lib/json/json_write.o 00:02:51.277 CC lib/rdma_utils/rdma_utils.o 00:02:51.277 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.277 CC lib/env_dpdk/pci_event.o 00:02:51.277 CC lib/env_dpdk/sigbus_handler.o 00:02:51.277 CC lib/env_dpdk/pci_dpdk.o 00:02:51.277 CC lib/idxd/idxd.o 00:02:51.277 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.277 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.277 CC lib/idxd/idxd_user.o 00:02:51.277 CC lib/idxd/idxd_kernel.o 00:02:51.537 LIB libspdk_rdma_provider.a 00:02:51.537 LIB libspdk_conf.a 00:02:51.537 SO libspdk_conf.so.6.0 00:02:51.797 SO libspdk_rdma_provider.so.6.0 00:02:51.797 LIB libspdk_rdma_utils.a 00:02:51.797 SO libspdk_rdma_utils.so.1.0 00:02:51.797 LIB libspdk_json.a 00:02:51.797 SYMLINK libspdk_conf.so 00:02:51.797 SYMLINK libspdk_rdma_provider.so 00:02:51.797 SYMLINK libspdk_rdma_utils.so 00:02:51.797 SO libspdk_json.so.6.0 00:02:51.797 SYMLINK libspdk_json.so 00:02:51.797 LIB libspdk_vmd.a 00:02:52.058 SO libspdk_vmd.so.6.0 00:02:52.058 SYMLINK libspdk_vmd.so 00:02:52.058 LIB libspdk_idxd.a 00:02:52.058 SO libspdk_idxd.so.12.0 00:02:52.058 SYMLINK libspdk_idxd.so 00:02:52.058 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.058 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.318 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.318 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.579 LIB libspdk_jsonrpc.a 00:02:52.579 SO libspdk_jsonrpc.so.6.0 00:02:52.579 SYMLINK libspdk_jsonrpc.so 00:02:52.840 LIB libspdk_env_dpdk.a 00:02:52.840 CC lib/rpc/rpc.o 00:02:53.101 SO libspdk_env_dpdk.so.15.0 00:02:53.101 SYMLINK 
libspdk_env_dpdk.so 00:02:53.101 LIB libspdk_rpc.a 00:02:53.362 SO libspdk_rpc.so.6.0 00:02:53.362 SYMLINK libspdk_rpc.so 00:02:53.623 CC lib/keyring/keyring.o 00:02:53.623 CC lib/keyring/keyring_rpc.o 00:02:53.623 CC lib/notify/notify.o 00:02:53.623 CC lib/notify/notify_rpc.o 00:02:53.623 CC lib/trace/trace.o 00:02:53.623 CC lib/trace/trace_flags.o 00:02:53.623 CC lib/trace/trace_rpc.o 00:02:53.884 LIB libspdk_notify.a 00:02:53.884 SO libspdk_notify.so.6.0 00:02:53.884 LIB libspdk_keyring.a 00:02:53.884 SO libspdk_keyring.so.1.0 00:02:53.884 LIB libspdk_trace.a 00:02:53.884 SYMLINK libspdk_notify.so 00:02:53.884 SO libspdk_trace.so.10.0 00:02:54.144 SYMLINK libspdk_keyring.so 00:02:54.144 SYMLINK libspdk_trace.so 00:02:54.406 CC lib/sock/sock.o 00:02:54.406 CC lib/sock/sock_rpc.o 00:02:54.406 CC lib/thread/thread.o 00:02:54.406 CC lib/thread/iobuf.o 00:02:54.980 LIB libspdk_sock.a 00:02:54.980 SO libspdk_sock.so.10.0 00:02:54.980 SYMLINK libspdk_sock.so 00:02:55.240 CC lib/nvme/nvme_ctrlr.o 00:02:55.240 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.240 CC lib/nvme/nvme_fabric.o 00:02:55.240 CC lib/nvme/nvme_ns_cmd.o 00:02:55.240 CC lib/nvme/nvme_ns.o 00:02:55.240 CC lib/nvme/nvme_pcie_common.o 00:02:55.240 CC lib/nvme/nvme_pcie.o 00:02:55.240 CC lib/nvme/nvme_qpair.o 00:02:55.240 CC lib/nvme/nvme.o 00:02:55.240 CC lib/nvme/nvme_quirks.o 00:02:55.240 CC lib/nvme/nvme_transport.o 00:02:55.240 CC lib/nvme/nvme_discovery.o 00:02:55.240 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.240 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.240 CC lib/nvme/nvme_tcp.o 00:02:55.240 CC lib/nvme/nvme_opal.o 00:02:55.240 CC lib/nvme/nvme_io_msg.o 00:02:55.240 CC lib/nvme/nvme_poll_group.o 00:02:55.240 CC lib/nvme/nvme_zns.o 00:02:55.240 CC lib/nvme/nvme_stubs.o 00:02:55.240 CC lib/nvme/nvme_auth.o 00:02:55.241 CC lib/nvme/nvme_cuse.o 00:02:55.241 CC lib/nvme/nvme_rdma.o 00:02:56.185 LIB libspdk_thread.a 00:02:56.185 SO libspdk_thread.so.10.1 00:02:56.185 SYMLINK libspdk_thread.so 00:02:56.446 CC lib/init/json_config.o 00:02:56.446 CC lib/init/subsystem.o 00:02:56.446 CC lib/init/rpc.o 00:02:56.446 CC lib/init/subsystem_rpc.o 00:02:56.446 CC lib/blob/blobstore.o 00:02:56.446 CC lib/blob/request.o 00:02:56.446 CC lib/virtio/virtio.o 00:02:56.446 CC lib/blob/zeroes.o 00:02:56.446 CC lib/accel/accel.o 00:02:56.446 CC lib/blob/blob_bs_dev.o 00:02:56.446 CC lib/virtio/virtio_vhost_user.o 00:02:56.446 CC lib/accel/accel_rpc.o 00:02:56.446 CC lib/virtio/virtio_vfio_user.o 00:02:56.446 CC lib/accel/accel_sw.o 00:02:56.446 CC lib/virtio/virtio_pci.o 00:02:57.016 LIB libspdk_init.a 00:02:57.016 SO libspdk_init.so.5.0 00:02:57.016 LIB libspdk_virtio.a 00:02:57.016 SYMLINK libspdk_init.so 00:02:57.016 SO libspdk_virtio.so.7.0 00:02:57.016 SYMLINK libspdk_virtio.so 00:02:57.276 CC lib/event/app.o 00:02:57.276 CC lib/event/reactor.o 00:02:57.276 CC lib/event/log_rpc.o 00:02:57.276 CC lib/event/app_rpc.o 00:02:57.276 CC lib/event/scheduler_static.o 00:02:57.536 LIB libspdk_nvme.a 00:02:57.796 LIB libspdk_accel.a 00:02:57.796 SO libspdk_accel.so.16.0 00:02:57.796 SO libspdk_nvme.so.13.1 00:02:57.796 SYMLINK libspdk_accel.so 00:02:57.796 LIB libspdk_event.a 00:02:58.055 SO libspdk_event.so.14.0 00:02:58.055 SYMLINK libspdk_event.so 00:02:58.055 SYMLINK libspdk_nvme.so 00:02:58.316 CC lib/bdev/bdev.o 00:02:58.316 CC lib/bdev/bdev_rpc.o 00:02:58.316 CC lib/bdev/bdev_zone.o 00:02:58.316 CC lib/bdev/part.o 00:02:58.316 CC lib/bdev/scsi_nvme.o 00:02:59.695 LIB libspdk_blob.a 00:02:59.955 SO libspdk_blob.so.11.0 00:02:59.956 SYMLINK 
libspdk_blob.so 00:03:00.216 CC lib/lvol/lvol.o 00:03:00.216 CC lib/blobfs/blobfs.o 00:03:00.216 CC lib/blobfs/tree.o 00:03:01.157 LIB libspdk_bdev.a 00:03:01.157 SO libspdk_bdev.so.16.0 00:03:01.157 LIB libspdk_blobfs.a 00:03:01.157 SO libspdk_blobfs.so.10.0 00:03:01.157 SYMLINK libspdk_bdev.so 00:03:01.418 LIB libspdk_lvol.a 00:03:01.418 SYMLINK libspdk_blobfs.so 00:03:01.418 SO libspdk_lvol.so.10.0 00:03:01.418 SYMLINK libspdk_lvol.so 00:03:01.680 CC lib/nbd/nbd.o 00:03:01.680 CC lib/nbd/nbd_rpc.o 00:03:01.680 CC lib/nvmf/ctrlr.o 00:03:01.680 CC lib/nvmf/ctrlr_discovery.o 00:03:01.680 CC lib/nvmf/ctrlr_bdev.o 00:03:01.680 CC lib/ublk/ublk.o 00:03:01.680 CC lib/ublk/ublk_rpc.o 00:03:01.680 CC lib/nvmf/subsystem.o 00:03:01.680 CC lib/ftl/ftl_core.o 00:03:01.680 CC lib/nvmf/nvmf.o 00:03:01.680 CC lib/ftl/ftl_init.o 00:03:01.680 CC lib/nvmf/nvmf_rpc.o 00:03:01.680 CC lib/ftl/ftl_layout.o 00:03:01.680 CC lib/nvmf/transport.o 00:03:01.680 CC lib/ftl/ftl_debug.o 00:03:01.680 CC lib/nvmf/tcp.o 00:03:01.680 CC lib/ftl/ftl_io.o 00:03:01.680 CC lib/nvmf/stubs.o 00:03:01.680 CC lib/ftl/ftl_sb.o 00:03:01.680 CC lib/scsi/dev.o 00:03:01.680 CC lib/nvmf/mdns_server.o 00:03:01.680 CC lib/ftl/ftl_l2p.o 00:03:01.680 CC lib/scsi/lun.o 00:03:01.680 CC lib/nvmf/rdma.o 00:03:01.680 CC lib/ftl/ftl_l2p_flat.o 00:03:01.680 CC lib/scsi/port.o 00:03:01.680 CC lib/nvmf/auth.o 00:03:01.680 CC lib/scsi/scsi.o 00:03:01.680 CC lib/ftl/ftl_nv_cache.o 00:03:01.680 CC lib/scsi/scsi_bdev.o 00:03:01.680 CC lib/ftl/ftl_band.o 00:03:01.680 CC lib/scsi/scsi_pr.o 00:03:01.680 CC lib/ftl/ftl_band_ops.o 00:03:01.680 CC lib/ftl/ftl_writer.o 00:03:01.680 CC lib/scsi/scsi_rpc.o 00:03:01.680 CC lib/ftl/ftl_rq.o 00:03:01.680 CC lib/scsi/task.o 00:03:01.680 CC lib/ftl/ftl_reloc.o 00:03:01.680 CC lib/ftl/ftl_p2l.o 00:03:01.680 CC lib/ftl/ftl_l2p_cache.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.680 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.680 CC lib/ftl/utils/ftl_conf.o 00:03:01.680 CC lib/ftl/utils/ftl_md.o 00:03:01.680 CC lib/ftl/utils/ftl_mempool.o 00:03:01.680 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.680 CC lib/ftl/utils/ftl_property.o 00:03:01.680 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.680 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.680 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.680 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.680 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.680 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.680 CC lib/ftl/base/ftl_base_dev.o 00:03:01.680 CC lib/ftl/ftl_trace.o 00:03:01.680 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.248 LIB libspdk_nbd.a 00:03:02.248 SO libspdk_nbd.so.7.0 00:03:02.248 LIB libspdk_scsi.a 00:03:02.248 SO libspdk_scsi.so.9.0 00:03:02.248 SYMLINK libspdk_nbd.so 00:03:02.508 LIB libspdk_ublk.a 00:03:02.508 SYMLINK libspdk_scsi.so 
00:03:02.508 SO libspdk_ublk.so.3.0 00:03:02.508 SYMLINK libspdk_ublk.so 00:03:02.768 CC lib/vhost/vhost.o 00:03:02.768 CC lib/iscsi/conn.o 00:03:02.768 CC lib/vhost/vhost_rpc.o 00:03:02.768 CC lib/iscsi/init_grp.o 00:03:02.768 CC lib/vhost/vhost_scsi.o 00:03:02.768 CC lib/iscsi/iscsi.o 00:03:02.768 CC lib/vhost/vhost_blk.o 00:03:02.768 CC lib/iscsi/md5.o 00:03:02.768 CC lib/vhost/rte_vhost_user.o 00:03:02.768 CC lib/iscsi/param.o 00:03:02.768 CC lib/iscsi/portal_grp.o 00:03:02.768 CC lib/iscsi/tgt_node.o 00:03:02.768 CC lib/iscsi/iscsi_subsystem.o 00:03:02.768 CC lib/iscsi/iscsi_rpc.o 00:03:02.768 CC lib/iscsi/task.o 00:03:02.768 LIB libspdk_ftl.a 00:03:03.028 SO libspdk_ftl.so.9.0 00:03:03.289 SYMLINK libspdk_ftl.so 00:03:03.860 LIB libspdk_iscsi.a 00:03:03.860 LIB libspdk_vhost.a 00:03:03.860 SO libspdk_iscsi.so.8.0 00:03:03.860 SO libspdk_vhost.so.8.0 00:03:03.860 LIB libspdk_nvmf.a 00:03:04.120 SYMLINK libspdk_vhost.so 00:03:04.120 SO libspdk_nvmf.so.19.0 00:03:04.120 SYMLINK libspdk_iscsi.so 00:03:04.381 SYMLINK libspdk_nvmf.so 00:03:04.954 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.954 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.954 CC module/keyring/linux/keyring.o 00:03:04.954 CC module/keyring/linux/keyring_rpc.o 00:03:04.954 LIB libspdk_env_dpdk_rpc.a 00:03:04.954 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.954 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.954 CC module/blob/bdev/blob_bdev.o 00:03:04.954 CC module/accel/ioat/accel_ioat.o 00:03:04.954 CC module/keyring/file/keyring.o 00:03:04.954 CC module/accel/dsa/accel_dsa.o 00:03:04.954 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.954 CC module/keyring/file/keyring_rpc.o 00:03:04.954 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.954 CC module/accel/iaa/accel_iaa.o 00:03:04.954 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.954 CC module/sock/posix/posix.o 00:03:04.954 CC module/accel/error/accel_error.o 00:03:04.954 CC module/accel/error/accel_error_rpc.o 00:03:04.954 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.215 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.215 LIB libspdk_keyring_linux.a 00:03:05.215 LIB libspdk_scheduler_gscheduler.a 00:03:05.215 LIB libspdk_keyring_file.a 00:03:05.215 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.215 LIB libspdk_scheduler_dynamic.a 00:03:05.215 SO libspdk_keyring_linux.so.1.0 00:03:05.215 SO libspdk_keyring_file.so.1.0 00:03:05.215 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.215 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.215 LIB libspdk_accel_ioat.a 00:03:05.215 LIB libspdk_accel_error.a 00:03:05.215 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.215 LIB libspdk_accel_iaa.a 00:03:05.215 LIB libspdk_blob_bdev.a 00:03:05.215 SO libspdk_accel_ioat.so.6.0 00:03:05.215 SYMLINK libspdk_keyring_linux.so 00:03:05.215 SO libspdk_accel_error.so.2.0 00:03:05.215 SO libspdk_accel_iaa.so.3.0 00:03:05.215 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.215 SYMLINK libspdk_keyring_file.so 00:03:05.215 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.215 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.215 LIB libspdk_accel_dsa.a 00:03:05.215 SO libspdk_blob_bdev.so.11.0 00:03:05.476 SYMLINK libspdk_accel_ioat.so 00:03:05.476 SYMLINK libspdk_accel_error.so 00:03:05.476 SO libspdk_accel_dsa.so.5.0 00:03:05.476 SYMLINK libspdk_accel_iaa.so 00:03:05.476 SYMLINK libspdk_blob_bdev.so 00:03:05.476 SYMLINK libspdk_accel_dsa.so 00:03:05.738 LIB libspdk_sock_posix.a 00:03:05.998 SO libspdk_sock_posix.so.6.0 00:03:05.998 CC module/bdev/delay/vbdev_delay.o 00:03:05.998 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:03:05.998 CC module/bdev/split/vbdev_split.o 00:03:05.998 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.998 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.998 CC module/bdev/error/vbdev_error.o 00:03:05.998 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.998 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.998 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.998 CC module/bdev/gpt/gpt.o 00:03:05.998 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.998 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.998 CC module/bdev/aio/bdev_aio.o 00:03:05.998 CC module/bdev/raid/bdev_raid.o 00:03:05.998 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.998 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.998 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.998 CC module/bdev/malloc/bdev_malloc.o 00:03:05.998 CC module/bdev/raid/raid0.o 00:03:05.998 CC module/bdev/raid/raid1.o 00:03:05.998 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.998 CC module/bdev/raid/concat.o 00:03:05.998 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.998 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.998 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.998 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:05.998 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.998 CC module/bdev/nvme/bdev_nvme.o 00:03:05.998 CC module/bdev/ftl/bdev_ftl.o 00:03:05.998 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:05.998 CC module/bdev/null/bdev_null.o 00:03:05.998 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:05.998 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.998 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.998 CC module/bdev/null/bdev_null_rpc.o 00:03:05.998 CC module/bdev/nvme/nvme_rpc.o 00:03:05.998 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.998 CC module/bdev/nvme/vbdev_opal.o 00:03:05.998 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.998 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.998 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.998 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.998 SYMLINK libspdk_sock_posix.so 00:03:06.257 LIB libspdk_blobfs_bdev.a 00:03:06.257 LIB libspdk_bdev_split.a 00:03:06.257 SO libspdk_blobfs_bdev.so.6.0 00:03:06.257 SO libspdk_bdev_split.so.6.0 00:03:06.257 LIB libspdk_bdev_error.a 00:03:06.257 SYMLINK libspdk_blobfs_bdev.so 00:03:06.257 LIB libspdk_bdev_gpt.a 00:03:06.257 LIB libspdk_bdev_null.a 00:03:06.257 LIB libspdk_bdev_ftl.a 00:03:06.257 SO libspdk_bdev_error.so.6.0 00:03:06.257 LIB libspdk_bdev_passthru.a 00:03:06.257 SYMLINK libspdk_bdev_split.so 00:03:06.257 SO libspdk_bdev_gpt.so.6.0 00:03:06.257 SO libspdk_bdev_null.so.6.0 00:03:06.257 LIB libspdk_bdev_aio.a 00:03:06.257 SO libspdk_bdev_ftl.so.6.0 00:03:06.257 SO libspdk_bdev_passthru.so.6.0 00:03:06.257 LIB libspdk_bdev_zone_block.a 00:03:06.257 LIB libspdk_bdev_delay.a 00:03:06.257 LIB libspdk_bdev_iscsi.a 00:03:06.257 SO libspdk_bdev_aio.so.6.0 00:03:06.257 SYMLINK libspdk_bdev_error.so 00:03:06.257 LIB libspdk_bdev_malloc.a 00:03:06.518 SYMLINK libspdk_bdev_gpt.so 00:03:06.518 SO libspdk_bdev_zone_block.so.6.0 00:03:06.518 SO libspdk_bdev_delay.so.6.0 00:03:06.518 SO libspdk_bdev_iscsi.so.6.0 00:03:06.518 SYMLINK libspdk_bdev_null.so 00:03:06.518 SYMLINK libspdk_bdev_passthru.so 00:03:06.518 SO libspdk_bdev_malloc.so.6.0 00:03:06.518 SYMLINK libspdk_bdev_ftl.so 00:03:06.518 SYMLINK libspdk_bdev_zone_block.so 00:03:06.518 SYMLINK libspdk_bdev_aio.so 00:03:06.518 SYMLINK libspdk_bdev_iscsi.so 00:03:06.518 SYMLINK libspdk_bdev_delay.so 00:03:06.518 SYMLINK libspdk_bdev_malloc.so 00:03:06.518 LIB libspdk_bdev_lvol.a 
00:03:06.518 SO libspdk_bdev_lvol.so.6.0 00:03:06.518 LIB libspdk_bdev_virtio.a 00:03:06.518 SO libspdk_bdev_virtio.so.6.0 00:03:06.518 SYMLINK libspdk_bdev_lvol.so 00:03:06.779 SYMLINK libspdk_bdev_virtio.so 00:03:07.039 LIB libspdk_bdev_raid.a 00:03:07.039 SO libspdk_bdev_raid.so.6.0 00:03:07.299 SYMLINK libspdk_bdev_raid.so 00:03:08.683 LIB libspdk_bdev_nvme.a 00:03:08.683 SO libspdk_bdev_nvme.so.7.0 00:03:08.683 SYMLINK libspdk_bdev_nvme.so 00:03:09.258 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.258 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.258 CC module/event/subsystems/vmd/vmd.o 00:03:09.258 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.258 CC module/event/subsystems/keyring/keyring.o 00:03:09.258 CC module/event/subsystems/sock/sock.o 00:03:09.258 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.258 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.552 LIB libspdk_event_scheduler.a 00:03:09.552 LIB libspdk_event_keyring.a 00:03:09.552 LIB libspdk_event_vmd.a 00:03:09.552 LIB libspdk_event_sock.a 00:03:09.552 LIB libspdk_event_vhost_blk.a 00:03:09.552 LIB libspdk_event_iobuf.a 00:03:09.552 SO libspdk_event_keyring.so.1.0 00:03:09.552 SO libspdk_event_scheduler.so.4.0 00:03:09.552 SO libspdk_event_vmd.so.6.0 00:03:09.552 SO libspdk_event_sock.so.5.0 00:03:09.552 SO libspdk_event_vhost_blk.so.3.0 00:03:09.552 SO libspdk_event_iobuf.so.3.0 00:03:09.552 SYMLINK libspdk_event_keyring.so 00:03:09.552 SYMLINK libspdk_event_scheduler.so 00:03:09.552 SYMLINK libspdk_event_sock.so 00:03:09.552 SYMLINK libspdk_event_vmd.so 00:03:09.552 SYMLINK libspdk_event_vhost_blk.so 00:03:09.552 SYMLINK libspdk_event_iobuf.so 00:03:09.841 CC module/event/subsystems/accel/accel.o 00:03:10.103 LIB libspdk_event_accel.a 00:03:10.103 SO libspdk_event_accel.so.6.0 00:03:10.103 SYMLINK libspdk_event_accel.so 00:03:10.676 CC module/event/subsystems/bdev/bdev.o 00:03:10.676 LIB libspdk_event_bdev.a 00:03:10.676 SO libspdk_event_bdev.so.6.0 00:03:10.937 SYMLINK libspdk_event_bdev.so 00:03:11.198 CC module/event/subsystems/nbd/nbd.o 00:03:11.198 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.198 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.198 CC module/event/subsystems/ublk/ublk.o 00:03:11.198 CC module/event/subsystems/scsi/scsi.o 00:03:11.198 LIB libspdk_event_nbd.a 00:03:11.198 LIB libspdk_event_ublk.a 00:03:11.472 SO libspdk_event_nbd.so.6.0 00:03:11.472 LIB libspdk_event_scsi.a 00:03:11.472 SO libspdk_event_ublk.so.3.0 00:03:11.472 SO libspdk_event_scsi.so.6.0 00:03:11.472 LIB libspdk_event_nvmf.a 00:03:11.472 SYMLINK libspdk_event_nbd.so 00:03:11.472 SO libspdk_event_nvmf.so.6.0 00:03:11.472 SYMLINK libspdk_event_ublk.so 00:03:11.472 SYMLINK libspdk_event_scsi.so 00:03:11.472 SYMLINK libspdk_event_nvmf.so 00:03:11.733 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.733 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.994 LIB libspdk_event_vhost_scsi.a 00:03:11.994 LIB libspdk_event_iscsi.a 00:03:11.994 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.994 SO libspdk_event_iscsi.so.6.0 00:03:11.994 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.994 SYMLINK libspdk_event_iscsi.so 00:03:12.255 SO libspdk.so.6.0 00:03:12.256 SYMLINK libspdk.so 00:03:12.827 CC app/trace_record/trace_record.o 00:03:12.827 CC app/spdk_nvme_identify/identify.o 00:03:12.827 CC app/spdk_lspci/spdk_lspci.o 00:03:12.827 CXX app/trace/trace.o 00:03:12.827 TEST_HEADER include/spdk/accel.h 00:03:12.827 TEST_HEADER include/spdk/accel_module.h 00:03:12.827 CC app/spdk_nvme_perf/perf.o 
00:03:12.827 TEST_HEADER include/spdk/assert.h 00:03:12.827 TEST_HEADER include/spdk/barrier.h 00:03:12.827 TEST_HEADER include/spdk/base64.h 00:03:12.827 CC app/spdk_top/spdk_top.o 00:03:12.827 TEST_HEADER include/spdk/bdev.h 00:03:12.827 TEST_HEADER include/spdk/bdev_module.h 00:03:12.827 TEST_HEADER include/spdk/bdev_zone.h 00:03:12.827 TEST_HEADER include/spdk/bit_array.h 00:03:12.827 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:12.827 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.827 TEST_HEADER include/spdk/bit_pool.h 00:03:12.827 CC test/rpc_client/rpc_client_test.o 00:03:12.827 TEST_HEADER include/spdk/blob_bdev.h 00:03:12.827 TEST_HEADER include/spdk/blob.h 00:03:12.827 TEST_HEADER include/spdk/blobfs.h 00:03:12.827 TEST_HEADER include/spdk/conf.h 00:03:12.827 TEST_HEADER include/spdk/cpuset.h 00:03:12.827 TEST_HEADER include/spdk/config.h 00:03:12.827 TEST_HEADER include/spdk/crc32.h 00:03:12.827 TEST_HEADER include/spdk/crc16.h 00:03:12.827 TEST_HEADER include/spdk/crc64.h 00:03:12.827 TEST_HEADER include/spdk/dif.h 00:03:12.827 TEST_HEADER include/spdk/dma.h 00:03:12.827 TEST_HEADER include/spdk/endian.h 00:03:12.827 TEST_HEADER include/spdk/env_dpdk.h 00:03:12.827 TEST_HEADER include/spdk/event.h 00:03:12.827 TEST_HEADER include/spdk/env.h 00:03:12.827 TEST_HEADER include/spdk/fd.h 00:03:12.827 TEST_HEADER include/spdk/fd_group.h 00:03:12.827 CC app/spdk_dd/spdk_dd.o 00:03:12.827 TEST_HEADER include/spdk/file.h 00:03:12.827 TEST_HEADER include/spdk/ftl.h 00:03:12.827 TEST_HEADER include/spdk/gpt_spec.h 00:03:12.827 TEST_HEADER include/spdk/hexlify.h 00:03:12.827 TEST_HEADER include/spdk/histogram_data.h 00:03:12.827 TEST_HEADER include/spdk/idxd.h 00:03:12.827 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.827 TEST_HEADER include/spdk/ioat.h 00:03:12.827 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.827 TEST_HEADER include/spdk/init.h 00:03:12.827 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.827 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.827 CC app/nvmf_tgt/nvmf_main.o 00:03:12.827 TEST_HEADER include/spdk/json.h 00:03:12.827 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.827 TEST_HEADER include/spdk/keyring.h 00:03:12.827 TEST_HEADER include/spdk/likely.h 00:03:12.827 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.827 TEST_HEADER include/spdk/keyring_module.h 00:03:12.827 CC app/spdk_tgt/spdk_tgt.o 00:03:12.827 TEST_HEADER include/spdk/log.h 00:03:12.827 TEST_HEADER include/spdk/lvol.h 00:03:12.827 TEST_HEADER include/spdk/memory.h 00:03:12.827 TEST_HEADER include/spdk/mmio.h 00:03:12.827 TEST_HEADER include/spdk/nbd.h 00:03:12.827 TEST_HEADER include/spdk/net.h 00:03:12.827 TEST_HEADER include/spdk/notify.h 00:03:12.827 TEST_HEADER include/spdk/nvme.h 00:03:12.827 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.827 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.827 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.827 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.827 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.827 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.827 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.827 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.827 TEST_HEADER include/spdk/nvmf.h 00:03:12.827 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.827 TEST_HEADER include/spdk/opal.h 00:03:12.827 TEST_HEADER include/spdk/opal_spec.h 00:03:12.827 TEST_HEADER include/spdk/pci_ids.h 00:03:12.827 TEST_HEADER include/spdk/pipe.h 00:03:12.827 TEST_HEADER include/spdk/queue.h 00:03:12.827 TEST_HEADER include/spdk/reduce.h 00:03:12.827 TEST_HEADER include/spdk/rpc.h 
00:03:12.827 TEST_HEADER include/spdk/scheduler.h 00:03:12.827 TEST_HEADER include/spdk/scsi.h 00:03:12.827 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.827 TEST_HEADER include/spdk/sock.h 00:03:12.827 TEST_HEADER include/spdk/stdinc.h 00:03:12.827 TEST_HEADER include/spdk/string.h 00:03:12.827 TEST_HEADER include/spdk/thread.h 00:03:12.827 TEST_HEADER include/spdk/trace.h 00:03:12.827 TEST_HEADER include/spdk/trace_parser.h 00:03:12.827 TEST_HEADER include/spdk/tree.h 00:03:12.827 TEST_HEADER include/spdk/ublk.h 00:03:12.827 TEST_HEADER include/spdk/util.h 00:03:12.827 TEST_HEADER include/spdk/uuid.h 00:03:12.827 TEST_HEADER include/spdk/version.h 00:03:12.827 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.827 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.827 TEST_HEADER include/spdk/vhost.h 00:03:12.827 TEST_HEADER include/spdk/vmd.h 00:03:12.827 TEST_HEADER include/spdk/xor.h 00:03:12.827 TEST_HEADER include/spdk/zipf.h 00:03:12.827 CXX test/cpp_headers/accel.o 00:03:12.827 CXX test/cpp_headers/assert.o 00:03:12.827 CXX test/cpp_headers/accel_module.o 00:03:12.827 CXX test/cpp_headers/barrier.o 00:03:12.827 CXX test/cpp_headers/base64.o 00:03:12.827 CXX test/cpp_headers/bdev.o 00:03:12.827 CXX test/cpp_headers/bdev_module.o 00:03:12.827 CXX test/cpp_headers/bdev_zone.o 00:03:12.827 CXX test/cpp_headers/bit_array.o 00:03:12.827 CXX test/cpp_headers/bit_pool.o 00:03:12.827 CXX test/cpp_headers/blob_bdev.o 00:03:12.827 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.827 CXX test/cpp_headers/blobfs.o 00:03:12.827 CXX test/cpp_headers/blob.o 00:03:12.827 CXX test/cpp_headers/conf.o 00:03:12.827 CXX test/cpp_headers/config.o 00:03:12.828 CXX test/cpp_headers/cpuset.o 00:03:12.828 CXX test/cpp_headers/crc16.o 00:03:12.828 CXX test/cpp_headers/crc64.o 00:03:12.828 CXX test/cpp_headers/crc32.o 00:03:12.828 CXX test/cpp_headers/dif.o 00:03:12.828 CXX test/cpp_headers/dma.o 00:03:12.828 CXX test/cpp_headers/env_dpdk.o 00:03:12.828 CXX test/cpp_headers/endian.o 00:03:12.828 CXX test/cpp_headers/env.o 00:03:12.828 CXX test/cpp_headers/event.o 00:03:12.828 CXX test/cpp_headers/fd_group.o 00:03:12.828 CXX test/cpp_headers/fd.o 00:03:12.828 CXX test/cpp_headers/file.o 00:03:12.828 CXX test/cpp_headers/ftl.o 00:03:12.828 CXX test/cpp_headers/gpt_spec.o 00:03:12.828 CXX test/cpp_headers/histogram_data.o 00:03:12.828 CXX test/cpp_headers/hexlify.o 00:03:12.828 CXX test/cpp_headers/idxd.o 00:03:12.828 CXX test/cpp_headers/idxd_spec.o 00:03:12.828 CXX test/cpp_headers/ioat.o 00:03:12.828 CXX test/cpp_headers/init.o 00:03:12.828 CXX test/cpp_headers/ioat_spec.o 00:03:12.828 CXX test/cpp_headers/json.o 00:03:12.828 CXX test/cpp_headers/iscsi_spec.o 00:03:12.828 CXX test/cpp_headers/jsonrpc.o 00:03:12.828 CXX test/cpp_headers/keyring_module.o 00:03:12.828 CXX test/cpp_headers/keyring.o 00:03:12.828 CXX test/cpp_headers/likely.o 00:03:12.828 CXX test/cpp_headers/log.o 00:03:12.828 CXX test/cpp_headers/mmio.o 00:03:12.828 CXX test/cpp_headers/memory.o 00:03:12.828 CXX test/cpp_headers/lvol.o 00:03:12.828 CXX test/cpp_headers/nbd.o 00:03:12.828 CXX test/cpp_headers/nvme.o 00:03:12.828 CXX test/cpp_headers/net.o 00:03:12.828 CXX test/cpp_headers/notify.o 00:03:12.828 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:12.828 CXX test/cpp_headers/nvme_intel.o 00:03:12.828 CXX test/cpp_headers/nvme_ocssd.o 00:03:12.828 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.828 CXX test/cpp_headers/nvme_spec.o 00:03:12.828 CXX test/cpp_headers/nvme_zns.o 00:03:12.828 CXX test/cpp_headers/nvmf.o 00:03:12.828 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:12.828 CXX test/cpp_headers/nvmf_transport.o 00:03:12.828 CXX test/cpp_headers/nvmf_spec.o 00:03:12.828 CXX test/cpp_headers/opal.o 00:03:12.828 CXX test/cpp_headers/opal_spec.o 00:03:12.828 CXX test/cpp_headers/pci_ids.o 00:03:12.828 CXX test/cpp_headers/pipe.o 00:03:12.828 LINK spdk_lspci 00:03:12.828 CXX test/cpp_headers/reduce.o 00:03:12.828 CXX test/cpp_headers/rpc.o 00:03:12.828 CXX test/cpp_headers/queue.o 00:03:12.828 CXX test/cpp_headers/scheduler.o 00:03:12.828 CXX test/cpp_headers/scsi.o 00:03:12.828 CXX test/cpp_headers/scsi_spec.o 00:03:12.828 CXX test/cpp_headers/stdinc.o 00:03:12.828 CXX test/cpp_headers/sock.o 00:03:12.828 CXX test/cpp_headers/string.o 00:03:12.828 CXX test/cpp_headers/thread.o 00:03:12.828 CXX test/cpp_headers/trace_parser.o 00:03:12.828 CC test/env/pci/pci_ut.o 00:03:12.828 CXX test/cpp_headers/trace.o 00:03:12.828 CXX test/cpp_headers/tree.o 00:03:12.828 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.828 CXX test/cpp_headers/util.o 00:03:12.828 CXX test/cpp_headers/ublk.o 00:03:12.828 CXX test/cpp_headers/uuid.o 00:03:12.828 CXX test/cpp_headers/version.o 00:03:12.828 CXX test/cpp_headers/vhost.o 00:03:12.828 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.828 CC examples/util/zipf/zipf.o 00:03:12.828 CC test/env/vtophys/vtophys.o 00:03:12.828 CXX test/cpp_headers/vmd.o 00:03:12.828 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.828 CXX test/cpp_headers/zipf.o 00:03:12.828 CXX test/cpp_headers/xor.o 00:03:12.828 CC examples/ioat/perf/perf.o 00:03:12.828 CC test/app/stub/stub.o 00:03:13.090 CC test/env/memory/memory_ut.o 00:03:13.090 CC test/thread/poller_perf/poller_perf.o 00:03:13.090 CC examples/ioat/verify/verify.o 00:03:13.090 CC test/app/histogram_perf/histogram_perf.o 00:03:13.090 CC app/fio/nvme/fio_plugin.o 00:03:13.090 CC test/app/jsoncat/jsoncat.o 00:03:13.090 CC test/dma/test_dma/test_dma.o 00:03:13.090 LINK spdk_nvme_discover 00:03:13.090 LINK rpc_client_test 00:03:13.090 CC app/fio/bdev/fio_plugin.o 00:03:13.090 LINK nvmf_tgt 00:03:13.090 CC test/app/bdev_svc/bdev_svc.o 00:03:13.090 LINK interrupt_tgt 00:03:13.090 LINK iscsi_tgt 00:03:13.090 LINK spdk_trace_record 00:03:13.349 LINK spdk_tgt 00:03:13.349 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.349 LINK spdk_dd 00:03:13.349 LINK spdk_trace 00:03:13.349 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.349 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.349 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.349 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.349 LINK vtophys 00:03:13.608 LINK histogram_perf 00:03:13.608 LINK zipf 00:03:13.608 LINK poller_perf 00:03:13.608 LINK jsoncat 00:03:13.608 LINK env_dpdk_post_init 00:03:13.608 LINK stub 00:03:13.608 LINK bdev_svc 00:03:13.608 LINK ioat_perf 00:03:13.608 LINK verify 00:03:13.608 CC app/vhost/vhost.o 00:03:13.868 LINK test_dma 00:03:13.868 LINK pci_ut 00:03:13.868 LINK spdk_nvme_perf 00:03:13.868 LINK nvme_fuzz 00:03:13.868 LINK vhost_fuzz 00:03:13.868 LINK vhost 00:03:13.868 CC test/event/reactor/reactor.o 00:03:13.868 CC test/event/reactor_perf/reactor_perf.o 00:03:13.868 CC test/event/event_perf/event_perf.o 00:03:13.868 CC examples/idxd/perf/perf.o 00:03:13.868 LINK spdk_nvme 00:03:14.128 LINK mem_callbacks 00:03:14.128 CC examples/sock/hello_world/hello_sock.o 00:03:14.128 CC examples/vmd/led/led.o 00:03:14.128 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.128 CC test/event/app_repeat/app_repeat.o 00:03:14.128 LINK spdk_bdev 00:03:14.128 LINK spdk_nvme_identify 00:03:14.128 CC 
test/event/scheduler/scheduler.o 00:03:14.128 CC examples/thread/thread/thread_ex.o 00:03:14.128 LINK spdk_top 00:03:14.128 LINK reactor 00:03:14.129 LINK led 00:03:14.129 LINK reactor_perf 00:03:14.129 LINK event_perf 00:03:14.129 LINK lsvmd 00:03:14.129 LINK app_repeat 00:03:14.129 LINK hello_sock 00:03:14.390 LINK scheduler 00:03:14.390 LINK thread 00:03:14.390 CC test/nvme/sgl/sgl.o 00:03:14.390 CC test/nvme/err_injection/err_injection.o 00:03:14.390 LINK idxd_perf 00:03:14.390 CC test/nvme/overhead/overhead.o 00:03:14.390 CC test/nvme/reset/reset.o 00:03:14.390 CC test/nvme/aer/aer.o 00:03:14.390 CC test/nvme/simple_copy/simple_copy.o 00:03:14.390 CC test/nvme/connect_stress/connect_stress.o 00:03:14.390 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:14.390 CC test/nvme/cuse/cuse.o 00:03:14.390 CC test/nvme/fdp/fdp.o 00:03:14.390 CC test/nvme/e2edp/nvme_dp.o 00:03:14.390 CC test/nvme/reserve/reserve.o 00:03:14.390 CC test/nvme/boot_partition/boot_partition.o 00:03:14.390 CC test/nvme/compliance/nvme_compliance.o 00:03:14.390 CC test/accel/dif/dif.o 00:03:14.390 CC test/nvme/fused_ordering/fused_ordering.o 00:03:14.390 CC test/nvme/startup/startup.o 00:03:14.390 CC test/blobfs/mkfs/mkfs.o 00:03:14.649 CC test/lvol/esnap/esnap.o 00:03:14.649 LINK memory_ut 00:03:14.649 LINK sgl 00:03:14.649 LINK err_injection 00:03:14.649 LINK boot_partition 00:03:14.649 LINK startup 00:03:14.649 LINK connect_stress 00:03:14.649 LINK doorbell_aers 00:03:14.649 LINK reserve 00:03:14.649 LINK fused_ordering 00:03:14.649 LINK mkfs 00:03:14.649 LINK simple_copy 00:03:14.649 CC examples/nvme/reconnect/reconnect.o 00:03:14.649 CC examples/nvme/hello_world/hello_world.o 00:03:14.649 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:14.649 CC examples/nvme/abort/abort.o 00:03:14.649 LINK reset 00:03:14.649 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:14.649 CC examples/nvme/hotplug/hotplug.o 00:03:14.649 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.649 LINK overhead 00:03:14.649 CC examples/nvme/arbitration/arbitration.o 00:03:14.649 LINK aer 00:03:14.649 LINK nvme_dp 00:03:14.649 LINK fdp 00:03:14.649 LINK nvme_compliance 00:03:14.910 CC examples/accel/perf/accel_perf.o 00:03:14.910 CC examples/blob/cli/blobcli.o 00:03:14.910 CC examples/blob/hello_world/hello_blob.o 00:03:14.910 LINK cmb_copy 00:03:14.910 LINK pmr_persistence 00:03:14.910 LINK dif 00:03:14.910 LINK hello_world 00:03:14.910 LINK hotplug 00:03:14.910 LINK reconnect 00:03:14.910 LINK arbitration 00:03:15.170 LINK abort 00:03:15.170 LINK hello_blob 00:03:15.170 LINK iscsi_fuzz 00:03:15.170 LINK nvme_manage 00:03:15.431 LINK accel_perf 00:03:15.431 LINK blobcli 00:03:15.431 CC test/bdev/bdevio/bdevio.o 00:03:15.692 LINK cuse 00:03:15.953 CC examples/bdev/hello_world/hello_bdev.o 00:03:15.953 LINK bdevio 00:03:15.953 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.214 LINK hello_bdev 00:03:16.784 LINK bdevperf 00:03:17.354 CC examples/nvmf/nvmf/nvmf.o 00:03:17.615 LINK nvmf 00:03:18.558 LINK esnap 00:03:19.132 00:03:19.132 real 0m53.853s 00:03:19.132 user 6m54.596s 00:03:19.132 sys 4m1.271s 00:03:19.132 20:10:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:19.132 20:10:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:19.132 ************************************ 00:03:19.132 END TEST make 00:03:19.132 ************************************ 00:03:19.132 20:10:30 -- common/autotest_common.sh@1142 -- $ return 0 00:03:19.132 20:10:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.132 20:10:30 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.132 20:10:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.132 20:10:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.132 20:10:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.132 20:10:30 -- pm/common@44 -- $ pid=3255282 00:03:19.132 20:10:30 -- pm/common@50 -- $ kill -TERM 3255282 00:03:19.132 20:10:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.132 20:10:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.132 20:10:30 -- pm/common@44 -- $ pid=3255283 00:03:19.132 20:10:30 -- pm/common@50 -- $ kill -TERM 3255283 00:03:19.132 20:10:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.132 20:10:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:19.132 20:10:30 -- pm/common@44 -- $ pid=3255285 00:03:19.132 20:10:30 -- pm/common@50 -- $ kill -TERM 3255285 00:03:19.132 20:10:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.132 20:10:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:19.132 20:10:30 -- pm/common@44 -- $ pid=3255308 00:03:19.132 20:10:30 -- pm/common@50 -- $ sudo -E kill -TERM 3255308 00:03:19.132 20:10:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:19.132 20:10:31 -- nvmf/common.sh@7 -- # uname -s 00:03:19.132 20:10:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.132 20:10:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.132 20:10:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.132 20:10:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.132 20:10:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.132 20:10:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.132 20:10:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.132 20:10:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.132 20:10:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.132 20:10:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.132 20:10:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:19.133 20:10:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:19.133 20:10:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.133 20:10:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.133 20:10:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:19.133 20:10:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.133 20:10:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:19.133 20:10:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.133 20:10:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.133 20:10:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.133 20:10:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.133 20:10:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.133 20:10:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.133 20:10:31 -- paths/export.sh@5 -- # export PATH 00:03:19.133 20:10:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.133 20:10:31 -- nvmf/common.sh@47 -- # : 0 00:03:19.133 20:10:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:19.133 20:10:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:19.133 20:10:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.133 20:10:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.133 20:10:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.133 20:10:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:19.133 20:10:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:19.133 20:10:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:19.133 20:10:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.133 20:10:31 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.133 20:10:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.133 20:10:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.133 20:10:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:19.133 20:10:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.133 20:10:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:19.133 20:10:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.133 20:10:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.133 20:10:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.133 20:10:31 -- spdk/autotest.sh@48 -- # udevadm_pid=3319106 00:03:19.133 20:10:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.133 20:10:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.133 20:10:31 -- pm/common@17 -- # local monitor 00:03:19.133 20:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.133 20:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.133 20:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.133 20:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.133 20:10:31 -- pm/common@21 -- # date +%s 00:03:19.133 20:10:31 -- pm/common@21 -- # date +%s 00:03:19.133 
20:10:31 -- pm/common@25 -- # sleep 1 00:03:19.133 20:10:31 -- pm/common@21 -- # date +%s 00:03:19.133 20:10:31 -- pm/common@21 -- # date +%s 00:03:19.133 20:10:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721671831 00:03:19.133 20:10:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721671831 00:03:19.133 20:10:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721671831 00:03:19.133 20:10:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721671831 00:03:19.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721671831_collect-vmstat.pm.log 00:03:19.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721671831_collect-cpu-temp.pm.log 00:03:19.133 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721671831_collect-cpu-load.pm.log 00:03:19.394 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721671831_collect-bmc-pm.bmc.pm.log 00:03:20.338 20:10:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.338 20:10:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.338 20:10:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.338 20:10:32 -- common/autotest_common.sh@10 -- # set +x 00:03:20.338 20:10:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.338 20:10:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:20.338 20:10:32 -- common/autotest_common.sh@10 -- # set +x 00:03:20.338 20:10:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:20.338 20:10:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.338 20:10:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.338 20:10:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:20.338 20:10:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.338 20:10:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.338 20:10:32 -- common/autotest_common.sh@1455 -- # uname 00:03:20.338 20:10:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.338 20:10:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.338 20:10:32 -- common/autotest_common.sh@1475 -- # uname 00:03:20.338 20:10:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.338 20:10:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:20.338 20:10:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:20.338 20:10:32 -- spdk/autotest.sh@72 -- # hash lcov 00:03:20.338 20:10:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:20.338 20:10:32 -- spdk/autotest.sh@80 -- # export 
'LCOV_OPTS= 00:03:20.338 --rc lcov_branch_coverage=1 00:03:20.338 --rc lcov_function_coverage=1 00:03:20.338 --rc genhtml_branch_coverage=1 00:03:20.338 --rc genhtml_function_coverage=1 00:03:20.338 --rc genhtml_legend=1 00:03:20.338 --rc geninfo_all_blocks=1 00:03:20.338 ' 00:03:20.338 20:10:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:20.338 --rc lcov_branch_coverage=1 00:03:20.338 --rc lcov_function_coverage=1 00:03:20.338 --rc genhtml_branch_coverage=1 00:03:20.338 --rc genhtml_function_coverage=1 00:03:20.338 --rc genhtml_legend=1 00:03:20.338 --rc geninfo_all_blocks=1 00:03:20.338 ' 00:03:20.338 20:10:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:20.338 --rc lcov_branch_coverage=1 00:03:20.338 --rc lcov_function_coverage=1 00:03:20.338 --rc genhtml_branch_coverage=1 00:03:20.338 --rc genhtml_function_coverage=1 00:03:20.338 --rc genhtml_legend=1 00:03:20.338 --rc geninfo_all_blocks=1 00:03:20.338 --no-external' 00:03:20.338 20:10:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:20.338 --rc lcov_branch_coverage=1 00:03:20.338 --rc lcov_function_coverage=1 00:03:20.338 --rc genhtml_branch_coverage=1 00:03:20.338 --rc genhtml_function_coverage=1 00:03:20.338 --rc genhtml_legend=1 00:03:20.338 --rc geninfo_all_blocks=1 00:03:20.338 --no-external' 00:03:20.338 20:10:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:20.338 lcov: LCOV version 1.14 00:03:20.338 20:10:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:30.341 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:30.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:30.341 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:30.602 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:30.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:30.603 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:30.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:30.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:30.863 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:43.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.740 20:11:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:49.740 20:11:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.740 20:11:00 -- common/autotest_common.sh@10 -- # set +x 00:03:49.740 20:11:00 -- spdk/autotest.sh@91 -- # rm -f 00:03:49.740 20:11:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.284 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:52.284 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:52.284 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:52.544 20:11:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:52.544 20:11:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.544 20:11:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.544 20:11:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.544 20:11:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.544 20:11:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.544 20:11:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.544 20:11:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.544 20:11:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.544 20:11:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:52.544 20:11:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.544 20:11:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:52.544 20:11:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:52.544 20:11:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:52.544 20:11:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.805 No valid GPT data, bailing 00:03:52.805 20:11:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.805 20:11:04 -- scripts/common.sh@391 -- # pt= 00:03:52.805 20:11:04 -- scripts/common.sh@392 -- # return 1 00:03:52.805 20:11:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.805 1+0 
records in 00:03:52.805 1+0 records out 00:03:52.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422118 s, 248 MB/s 00:03:52.805 20:11:04 -- spdk/autotest.sh@118 -- # sync 00:03:52.805 20:11:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.805 20:11:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.805 20:11:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.946 20:11:12 -- spdk/autotest.sh@124 -- # uname -s 00:04:00.946 20:11:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:00.946 20:11:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:00.946 20:11:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.946 20:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.946 20:11:12 -- common/autotest_common.sh@10 -- # set +x 00:04:00.946 ************************************ 00:04:00.946 START TEST setup.sh 00:04:00.946 ************************************ 00:04:00.946 20:11:12 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:00.946 * Looking for test storage... 00:04:00.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.946 20:11:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:00.946 20:11:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:00.947 20:11:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:00.947 20:11:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.947 20:11:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.947 20:11:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.947 ************************************ 00:04:00.947 START TEST acl 00:04:00.947 ************************************ 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:00.947 * Looking for test storage... 
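Before any block devices are touched, both autotest.sh and the acl suite run the get_zoned_devs helper traced above, which walks /sys/block/nvme* and records namespaces whose queue/zoned attribute is anything other than "none" so they can be skipped. A stripped-down sketch of that pattern (the PCI-address bookkeeping of the real helper is not visible in this trace and is left out):

# Sketch of the zoned-namespace filter seen in the trace above.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue          # attribute absent: treat as not zoned
    device=${nvme##*/}
    if [[ $(<"$nvme/queue/zoned") != none ]]; then  # "none" means a regular (non-zoned) namespace
        zoned_devs[$device]=1
    fi
done
echo "zoned namespaces: ${!zoned_devs[*]}"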
00:04:00.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.947 20:11:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:00.947 20:11:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:00.947 20:11:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.947 20:11:12 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.154 20:11:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:05.154 20:11:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:05.154 20:11:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.154 20:11:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:05.154 20:11:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.154 20:11:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:08.456 Hugepages 00:04:08.456 node hugesize free / total 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 00:04:08.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:08.456 20:11:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:08.456 20:11:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.456 20:11:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.456 20:11:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.456 ************************************ 00:04:08.456 START TEST denied 00:04:08.456 ************************************ 00:04:08.456 20:11:20 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:08.456 20:11:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:08.456 20:11:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:08.456 20:11:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:08.456 20:11:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.456 20:11:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.661 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:12.661 20:11:24 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.661 20:11:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.867 00:04:16.867 real 0m8.365s 00:04:16.867 user 0m2.800s 00:04:16.867 sys 0m4.832s 00:04:16.867 20:11:28 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.867 20:11:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:16.867 ************************************ 00:04:16.867 END TEST denied 00:04:16.867 ************************************ 00:04:16.867 20:11:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:16.867 20:11:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:16.867 20:11:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.867 20:11:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.867 20:11:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:16.867 ************************************ 00:04:16.867 START TEST allowed 00:04:16.867 ************************************ 00:04:16.867 20:11:28 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:16.867 20:11:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.867 20:11:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:16.867 20:11:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:16.867 20:11:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.867 20:11:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.156 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:22.156 20:11:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:22.156 20:11:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:22.156 20:11:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:22.156 20:11:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.156 20:11:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.363 00:04:26.363 real 0m9.207s 00:04:26.363 user 0m2.660s 00:04:26.363 sys 0m4.790s 00:04:26.363 20:11:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.363 20:11:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:26.363 ************************************ 00:04:26.363 END TEST allowed 00:04:26.363 ************************************ 00:04:26.363 20:11:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:26.363 00:04:26.363 real 0m25.218s 00:04:26.363 user 0m8.294s 00:04:26.363 sys 0m14.600s 00:04:26.363 20:11:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.363 20:11:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:26.363 ************************************ 00:04:26.363 END TEST acl 00:04:26.363 ************************************ 00:04:26.363 20:11:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:26.363 20:11:38 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:26.363 20:11:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.363 20:11:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.363 20:11:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.363 ************************************ 00:04:26.363 START TEST hugepages 00:04:26.363 ************************************ 00:04:26.363 20:11:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:26.363 * Looking for test storage... 00:04:26.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103030920 kB' 'MemAvailable: 106356120 kB' 'Buffers: 2704 kB' 'Cached: 14639364 kB' 'SwapCached: 0 kB' 'Active: 11596648 kB' 'Inactive: 3518544 kB' 'Active(anon): 11117424 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476496 kB' 'Mapped: 229028 kB' 'Shmem: 10644300 kB' 'KReclaimable: 303528 kB' 'Slab: 1128000 kB' 'SReclaimable: 303528 kB' 'SUnreclaim: 824472 kB' 'KernelStack: 27168 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12640916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.363 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.364 20:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.364 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
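Once Hugepagesize comes back as 2048 kB, hugepages.sh records the default page size and the sysfs/procfs knobs it will drive (hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages), clears any HUGE* environment overrides, discovers the two NUMA nodes, and zeroes every per-node hugepage pool (the echo 0 calls surrounding this note). A rough stand-alone equivalent of that clear_hp step, assuming the standard sysfs layout; the sudo tee form is illustrative, the harness writes the files directly:

#!/usr/bin/env bash
# Sketch of clear_hp: reset every per-node hugepage pool to zero before the test
# so the later allocation starts from a known-empty state.
shopt -s nullglob
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
    done
done
export CLEAR_HUGE=yes   # the harness exports the same flag once the pools are cleared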
echo 0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.365 20:11:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:26.366 20:11:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.366 20:11:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.366 20:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.366 ************************************ 00:04:26.366 START TEST default_setup 00:04:26.366 ************************************ 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.366 20:11:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.713 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:29.713 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:29.713 0000:80:01.4 (8086 0b00): ioatdma -> 
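default_setup asks for 2 GiB of hugepages on NUMA node 0: get_test_nr_hugepages divides the 2097152 kB request by the 2048 kB default page size, giving the nr_hugepages=1024 seen in the trace, and assigns all 1024 pages to the single node id that was passed in. The same arithmetic, with illustrative variable names:

#!/usr/bin/env bash
# Reproduce the sizing decision from the trace: a 2 GiB request at the default
# 2048 kB hugepage size becomes 1024 pages, all pinned to node 0.
size_kb=2097152            # requested size in kB (2 GiB)
hugepagesize_kb=2048       # Hugepagesize reported by /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))
declare -a nodes_test
nodes_test[0]=$nr_hugepages
echo "node 0 -> ${nodes_test[0]} pages of ${hugepagesize_kb} kB"   # node 0 -> 1024 pages of 2048 kB

With the counts decided, 'setup output' hands control to scripts/setup.sh; its output in the surrounding lines shows the Intel I/OAT DMA channels (8086 0b00) and the NVMe controller at 0000:65:00.0 being rebound from ioatdma/nvme to vfio-pci so the test can drive them from user space.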
vfio-pci 00:04:29.973 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:29.973 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105196924 kB' 'MemAvailable: 108522060 kB' 'Buffers: 2704 kB' 'Cached: 14639484 kB' 'SwapCached: 0 kB' 'Active: 11612812 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133588 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491940 kB' 'Mapped: 229116 kB' 'Shmem: 10644420 kB' 'KReclaimable: 303400 kB' 'Slab: 1125684 kB' 'SReclaimable: 303400 
kB' 'SUnreclaim: 822284 kB' 'KernelStack: 27200 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12654568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235220 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
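The long printf above is the first complete /proc/meminfo snapshot captured by verify_nr_hugepages, and the hugepage counters already reflect the allocation: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. 1024 x 2048. A quick way to cross-check that accounting outside the harness (an illustrative one-off, not part of the SPDK scripts):

#!/usr/bin/env bash
# Cross-check the hugepage accounting from the snapshot:
# HugePages_Total * Hugepagesize should equal the Hugetlb line.
awk '/^HugePages_Total:/ { total = $2 }
     /^Hugepagesize:/    { size_kb = $2 }
     /^Hugetlb:/         { hugetlb_kb = $2 }
     END { printf "expected %d kB, reported %d kB\n", total * size_kb, hugetlb_kb }' /proc/meminfo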
# read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.235 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.236 20:11:42 setup.sh.hugepages.default_setup -- 
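Before this first pass, the harness confirmed that transparent hugepages are not globally disabled: the string compared against *\[\n\e\v\e\r\]* above ('always [madvise] never') is the content of /sys/kernel/mm/transparent_hugepage/enabled, and since the bracketed mode is not [never], AnonHugePages is looked up and, being 0 kB in this run, yields anon=0. A compact expression of the same decision, assuming the standard THP sysfs file:

#!/usr/bin/env bash
# Sketch of the anon-hugepage bookkeeping traced above: only count AnonHugePages
# when transparent hugepages are not switched off system-wide.
thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp_enabled != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)  # in kB; 0 in this run
else
    anon=0
fi
echo "anon=${anon}"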
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105197204 kB' 'MemAvailable: 108522340 kB' 'Buffers: 2704 kB' 'Cached: 14639484 kB' 'SwapCached: 0 kB' 'Active: 11612620 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133396 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491752 kB' 'Mapped: 229076 kB' 'Shmem: 10644420 kB' 'KReclaimable: 303400 kB' 'Slab: 1125668 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822268 kB' 'KernelStack: 27152 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12655828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235220 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.501 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- 
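get_meminfo buffers the whole file with mapfile and then strips an optional 'Node <id> ' prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") step). The stripping is a no-op here because node= is empty and /proc/meminfo is read, but the same helper also serves the per-node files under /sys/devices/system/node/node<N>/meminfo, whose lines do carry that prefix. A sketch of that normalisation, assuming the usual per-node meminfo format:

#!/usr/bin/env bash
# Sketch of the file selection and prefix stripping around mapfile: per-node
# meminfo lines look like "Node 0 MemTotal: ... kB", so the "Node <id> " prefix
# is removed before the same key/value scan can run on either file.
shopt -s extglob
node=${1:-}                       # empty -> system-wide /proc/meminfo, as in this trace
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")  # strips the per-node prefix; no-op for /proc/meminfo
printf '%s\n' "${mem[@]:0:3}"     # show the first few normalised lines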
setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.502 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- 
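The second pass above ends when HugePages_Surp matches (0 in this run, so surp=0), and the third pass that starts here repeats the walk for HugePages_Rsvd. Surplus pages are overcommit pages allocated beyond nr_hugepages, and reserved pages are pages promised to a mapping but not yet faulted in; both are needed before the free/total counts can be judged against the 1024-page target. The same counters can be collected in one pass, as a sketch:

#!/usr/bin/env bash
# Gather the hugepage counters that verify_nr_hugepages extracts one at a time
# (total, free, reserved, surplus) in a single pass over /proc/meminfo.
declare -A hp
while IFS=': ' read -r key val _; do
    case $key in
        HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp) hp[$key]=$val ;;
    esac
done < /proc/meminfo
printf '%s=%s\n' total "${hp[HugePages_Total]}" free "${hp[HugePages_Free]}" \
                 rsvd "${hp[HugePages_Rsvd]}" surp "${hp[HugePages_Surp]}"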
setup/common.sh@28 -- # mapfile -t mem 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105198596 kB' 'MemAvailable: 108523732 kB' 'Buffers: 2704 kB' 'Cached: 14639504 kB' 'SwapCached: 0 kB' 'Active: 11612572 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133348 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492188 kB' 'Mapped: 229008 kB' 'Shmem: 10644440 kB' 'KReclaimable: 303400 kB' 'Slab: 1125664 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822264 kB' 'KernelStack: 27120 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12656016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235236 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.503 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 
20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.504 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.505 nr_hugepages=1024 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.505 resv_hugepages=0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.505 surplus_hugepages=0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.505 anon_hugepages=0 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105197996 kB' 'MemAvailable: 108523132 kB' 'Buffers: 2704 kB' 'Cached: 
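At this point the trace has gathered everything it needs and prints the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) before asserting (( 1024 == nr_hugepages + surp + resv )). The sketch below restates that accounting check on its own; the values are the ones reported in this log, and reading HugePages_Total with awk is our simplification rather than the script's own helper.

  # Hugepage accounting: the configured page count must equal the pages the
  # kernel reports as allocated plus any surplus and reserved pages.
  nr_hugepages=1024
  surp=0   # HugePages_Surp in this run
  resv=0   # HugePages_Rsvd in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: HugePages_Total=$total"
  else
      echo "hugepage accounting mismatch: HugePages_Total=$total," \
           "expected $((nr_hugepages + surp + resv))" >&2
  fi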
14639504 kB' 'SwapCached: 0 kB' 'Active: 11612036 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491684 kB' 'Mapped: 229008 kB' 'Shmem: 10644440 kB' 'KReclaimable: 303400 kB' 'Slab: 1125656 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822256 kB' 'KernelStack: 27248 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12656040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.505 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 
20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.506 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.507 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58076420 kB' 'MemUsed: 7582588 kB' 'SwapCached: 0 kB' 'Active: 2775996 kB' 'Inactive: 223860 kB' 'Active(anon): 2536572 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2784776 kB' 'Mapped: 92240 kB' 'AnonPages: 218216 kB' 'Shmem: 2321492 kB' 'KernelStack: 14840 kB' 'PageTables: 5284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 591056 kB' 
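The get_nodes step traced above loops over /sys/devices/system/node/node+([0-9]), records 1024 hugepages on node0 and 0 on node1, sets no_nodes=2, and then re-runs the meminfo lookup against /sys/devices/system/node/node0/meminfo to obtain the per-node HugePages_Surp. A rough per-node enumeration is sketched below; reading the counts from hugepages/hugepages-2048kB/nr_hugepages is our assumption about where those values come from (the trace only shows the results), with the 2048 kB size matching the Hugepagesize reported earlier in the log.

  # Enumerate NUMA nodes and the number of 2048 kB hugepages on each one.
  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}                                   # node0 -> 0
      nodes_sys[n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"                        # 2 on this machine
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[n]}"                       # node0=1024, node1=0
  done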
'SReclaimable: 124608 kB' 'SUnreclaim: 466448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.508 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
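The repetitive trace above is the get_meminfo loop in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file) one line at a time with IFS=': ' read -r var val _, hits continue for every key that is not the one requested, and echoes the value once the key matches (the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p form is simply how bash xtrace prints the pattern side of the [[ ... == ... ]] test). A minimal sketch of that pattern follows; the function name get_meminfo_field and its argument handling are illustrative assumptions, not the verbatim SPDK helper.

```bash
#!/usr/bin/env bash
# Minimal sketch (not the verbatim SPDK helper) of the /proc/meminfo parsing
# pattern traced above: split each line on ': ', skip keys until the requested
# one matches, then print its value.
get_meminfo_field() {
        local get=$1 node=${2:-}   # e.g. HugePages_Surp; optional NUMA node id
        local mem_f=/proc/meminfo
        local line var val _

        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while read -r line; do
                line=${line#"Node $node "}          # per-node lines carry a "Node N " prefix
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue    # same skip-until-match loop as in the trace
                echo "${val:-0}"
                return 0
        done < "$mem_f"

        echo 0                                      # key not present at all
}

# Example usage: surplus hugepages system-wide, free hugepages on node 0.
get_meminfo_field HugePages_Surp
get_meminfo_field HugePages_Free 0
```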
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
20:11:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.509
00:04:30.509 real 0m4.037s
00:04:30.509 user 0m1.586s
00:04:30.509 sys 0m2.484s
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.509 20:11:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:30.509 ************************************
00:04:30.509 END TEST default_setup
00:04:30.509 ************************************
00:04:30.509 20:11:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:30.509 20:11:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:30.509 20:11:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:30.509 20:11:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.509 20:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:30.509 ************************************
00:04:30.509 START TEST per_node_1G_alloc
************************************
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.509 20:11:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:33.809 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:33.809 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:33.809 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:33.809 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:33.809 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:33.809 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.390 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105246036 kB' 'MemAvailable: 108571172 kB' 'Buffers: 2704 kB' 'Cached: 14639644 kB' 'SwapCached: 0 kB' 'Active: 11612128 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132904 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491592 kB' 'Mapped: 227940 kB' 'Shmem: 10644580 kB' 'KReclaimable: 303400 kB' 'Slab: 1125344 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821944 kB' 'KernelStack: 27264 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.391 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105248396 kB' 'MemAvailable: 108573532 kB' 'Buffers: 2704 kB' 'Cached: 14639648 kB' 'SwapCached: 0 kB' 'Active: 11611480 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132256 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490992 kB' 'Mapped: 227940 kB' 'Shmem: 10644584 kB' 'KReclaimable: 303400 kB' 'Slab: 1125420 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822020 kB' 'KernelStack: 27264 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.392 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 
20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.393 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105249424 kB' 'MemAvailable: 108574560 kB' 'Buffers: 2704 kB' 'Cached: 14639664 kB' 'SwapCached: 0 kB' 'Active: 11611688 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132464 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491188 kB' 'Mapped: 227940 kB' 'Shmem: 10644600 kB' 'KReclaimable: 303400 kB' 'Slab: 1125420 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822020 kB' 'KernelStack: 27200 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 
20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.394 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.395 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.396 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.396 nr_hugepages=1024 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.396 resv_hugepages=0 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.396 surplus_hugepages=0 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.396 anon_hugepages=0 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105249260 kB' 'MemAvailable: 108574396 kB' 'Buffers: 2704 kB' 'Cached: 14639688 kB' 'SwapCached: 0 kB' 'Active: 11611676 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491156 kB' 'Mapped: 227940 kB' 'Shmem: 10644624 kB' 'KReclaimable: 303400 kB' 'Slab: 1125420 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822020 kB' 'KernelStack: 27376 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12650784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:34.396 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.397 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.398 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59146336 kB' 'MemUsed: 6512672 kB' 'SwapCached: 0 kB' 'Active: 2776556 kB' 'Inactive: 223860 kB' 'Active(anon): 2537132 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2784920 kB' 'Mapped: 91312 kB' 'AnonPages: 218676 kB' 'Shmem: 2321636 kB' 'KernelStack: 14904 kB' 'PageTables: 5380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 591128 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 466520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.398 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.398 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 
20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.399 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46104012 kB' 'MemUsed: 14575824 kB' 'SwapCached: 0 kB' 'Active: 8834944 kB' 'Inactive: 3294684 kB' 'Active(anon): 8595144 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3294684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11857496 kB' 'Mapped: 136628 kB' 'AnonPages: 272244 kB' 'Shmem: 8323012 kB' 
'KernelStack: 12344 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 534292 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 355500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.400 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.401 node0=512 expecting 512 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:34.401 node1=512 expecting 512 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.401 00:04:34.401 real 0m3.891s 00:04:34.401 user 0m1.577s 00:04:34.401 sys 0m2.372s 00:04:34.401 20:11:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.401 20:11:46 
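For reference, the long field-by-field scan traced above is setup/common.sh's get_meminfo walking a per-node meminfo file with IFS=': ' until it reaches the requested key (HugePages_Surp for node 0 and node 1 here) and echoing its value, which is how the test arrives at 512 expected pages per node. A minimal standalone approximation of that lookup, using the same /sys path seen in the log; the helper name below is illustrative and not part of the SPDK scripts:

    # Echo one field from a node's meminfo, e.g. HugePages_Surp for node 1.
    # Simplified stand-in for the setup/common.sh get_meminfo loop traced above.
    node_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val
        while IFS= read -r line; do
            line=${line#Node "$node" }           # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                      # e.g. 0 surplus pages, matching the trace
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    node_meminfo HugePages_Surp 1                # usage mirroring the get_meminfo call above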
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.401 ************************************ 00:04:34.401 END TEST per_node_1G_alloc 00:04:34.401 ************************************ 00:04:34.401 20:11:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.401 20:11:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:34.401 20:11:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.401 20:11:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.401 20:11:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.662 ************************************ 00:04:34.662 START TEST even_2G_alloc 00:04:34.662 ************************************ 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:34.662 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.663 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:34.663 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:34.663 20:11:46 
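The even_2G_alloc prologue just traced converts the 2097152 kB request into 1024 default-size (2048 kB) pages and assigns 512 to each of the two NUMA nodes, then sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes for the setup.sh run that follows. A rough sketch of that arithmetic under the same assumptions (2048 kB default huge page size, two nodes); variable names echo the trace, but the loop is simplified compared with hugepages.sh:

    # Even split of a hugepage budget across NUMA nodes, as set up above.
    size_kb=2097152                                                        # requested 2 GiB, in kB
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this rig
    nr_hugepages=$((size_kb / default_hugepages))                          # 2097152 / 2048 = 1024 pages
    nodes=(/sys/devices/system/node/node[0-9]*)                            # two nodes here
    declare -a nodes_test
    for ((node = 0; node < ${#nodes[@]}; node++)); do
        nodes_test[node]=$((nr_hugepages / ${#nodes[@]}))                  # 512 pages per node
    done
    echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes -> nodes_test=(${nodes_test[*]})"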
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:34.663 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.663 20:11:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.214 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:37.214 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:37.214 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 
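The scan that begins here is verify_nr_hugepages: it first checks that transparent hugepages are not pinned to [never] (the "always [madvise] never" string in the trace is the kernel's THP setting, presumably read from /sys/kernel/mm/transparent_hugepage/enabled), captures AnonHugePages so THP-backed memory can be accounted for, and then reads the system-wide HugePages_* counters from /proc/meminfo. A hedged equivalent of those reads, independent of the SPDK helpers:

    # THP state and hugetlb counters, matching the fields the trace scans for below.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)       # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled, so anonymous huge pages may exist; note the current figure (kB)
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)    # 1024 in this run
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
    echo "AnonHugePages=${anon} kB, HugePages_Total=$total, Free=$free, Surp=$surp"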
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105250980 kB' 'MemAvailable: 108576116 kB' 'Buffers: 2704 kB' 'Cached: 14639828 kB' 'SwapCached: 0 kB' 'Active: 11613208 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133984 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492568 kB' 'Mapped: 227992 kB' 'Shmem: 10644764 kB' 'KReclaimable: 303400 kB' 'Slab: 1125788 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822388 kB' 'KernelStack: 27344 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12649940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.476 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.477 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.745 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105253404 kB' 'MemAvailable: 108578540 kB' 'Buffers: 2704 kB' 'Cached: 14639832 kB' 'SwapCached: 0 kB' 'Active: 11612664 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133440 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491932 kB' 'Mapped: 227960 kB' 'Shmem: 10644768 kB' 'KReclaimable: 303400 kB' 'Slab: 1125784 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822384 kB' 'KernelStack: 27232 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.746 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.747 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105253760 kB' 'MemAvailable: 108578896 kB' 'Buffers: 2704 kB' 'Cached: 14639848 kB' 'SwapCached: 0 kB' 'Active: 11611968 kB' 'Inactive: 3518544 kB' 'Active(anon): 11132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491220 kB' 'Mapped: 227960 kB' 'Shmem: 10644784 kB' 'KReclaimable: 303400 kB' 'Slab: 1125820 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822420 kB' 'KernelStack: 27136 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12649980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
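A quick sanity check on the meminfo snapshot just printed: HugePages_Total 1024 at Hugepagesize 2048 kB accounts exactly for the reported Hugetlb figure, consistent with the 2 GiB this even_2G_alloc case allocates:

    # 1024 pages x 2048 kB/page = 2097152 kB = 2 GiB (matches 'Hugetlb: 2097152 kB' above)
    echo "$((1024 * 2048)) kB"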
00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 
20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.748 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.749 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.750 nr_hugepages=1024 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.750 resv_hugepages=0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.750 surplus_hugepages=0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.750 anon_hugepages=0 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.750 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105252752 kB' 'MemAvailable: 108577888 kB' 'Buffers: 2704 kB' 'Cached: 14639848 kB' 'SwapCached: 0 kB' 'Active: 11612800 kB' 'Inactive: 3518544 kB' 'Active(anon): 11133576 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491580 kB' 'Mapped: 227960 kB' 'Shmem: 10644784 kB' 'KReclaimable: 303400 kB' 'Slab: 1125660 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822260 kB' 'KernelStack: 27184 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12651612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
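What the block of xtrace above boils down to: setup/common.sh's get_meminfo walks /proc/meminfo with IFS=': ', skips every key that is not the one requested, and echoes the matching value; setup/hugepages.sh then stores the results as surp/resv and checks them against the configured page count. A minimal sketch of that pattern, using a hypothetical get_meminfo_sketch helper rather than the real script:

    #!/usr/bin/env bash
    # Minimal sketch of the lookup the trace above exercises: pull one field
    # out of /proc/meminfo and feed it into the hugepage accounting check.
    # Simplified re-implementation for illustration, not the exact setup/common.sh.

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every line until the requested key matches, then print its
            # value (HugePages_* fields are plain page counts, no "kB" suffix).
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    nr_hugepages=1024                               # what the test configured
    surp=$(get_meminfo_sketch HugePages_Surp)       # -> 0 in this trace
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # -> 0 in this trace
    total=$(get_meminfo_sketch HugePages_Total)     # -> 1024 in this trace
    # Same consistency check as setup/hugepages.sh@107/@110 in the trace:
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"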
00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.750 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.751 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.752 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59166628 kB' 'MemUsed: 6492380 kB' 'SwapCached: 0 kB' 'Active: 2775668 kB' 'Inactive: 223860 kB' 'Active(anon): 2536244 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2785040 kB' 'Mapped: 91312 kB' 'AnonPages: 217700 kB' 'Shmem: 2321756 kB' 'KernelStack: 14952 kB' 'PageTables: 5532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 591508 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 466900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 
20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.753 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.753 
20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
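The xtrace above is setup/common.sh resolving get_meminfo HugePages_Surp per NUMA node: when /sys/devices/system/node/nodeN/meminfo exists it is read instead of /proc/meminfo, the "Node N" prefix is stripped from every line, and the fields are scanned one by one until the requested key matches, at which point its value is echoed (0 surplus pages for node 0 above). A minimal standalone sketch of that lookup, inferred from the trace rather than copied from the SPDK tree (the name get_meminfo_sketch is illustrative):

#!/usr/bin/env bash
shopt -s extglob                              # needed for the +([0-9]) pattern below

# Sketch of a get_meminfo-style lookup as seen in the trace; not the verbatim
# SPDK helper. Prints the value of key $1 from /proc/meminfo, or from the
# per-node file when a node index is passed as $2.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix lines with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # e.g. HugePages_Surp -> 0 in the dump above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Surp 0           # would print 0 given the node-0 dump shown above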
00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46083928 kB' 'MemUsed: 14595908 kB' 'SwapCached: 0 kB' 'Active: 8837572 kB' 'Inactive: 3294684 kB' 'Active(anon): 8597772 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3294684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11857556 kB' 'Mapped: 136648 kB' 'AnonPages: 274768 kB' 'Shmem: 8323072 kB' 'KernelStack: 12344 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 534152 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 355360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.754 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.755 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.756 node0=512 expecting 512 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:37.756 node1=512 expecting 512 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:37.756 00:04:37.756 real 0m3.227s 00:04:37.756 user 0m1.113s 00:04:37.756 sys 0m2.139s 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.756 20:11:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.756 ************************************ 00:04:37.756 END TEST even_2G_alloc 00:04:37.756 
************************************ 00:04:37.756 20:11:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:37.756 20:11:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.756 20:11:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.756 20:11:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.756 20:11:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.756 ************************************ 00:04:37.756 START TEST odd_alloc 00:04:37.756 ************************************ 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.756 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.757 20:11:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
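Before handing off to scripts/setup.sh, the odd_alloc trace above sizes the pool: HUGEMEM=2049 MB is 2098176 kB, which at the default 2048 kB hugepage size works out to 1025 pages, and the per-node bookkeeping splits them 513/512 so node 0 carries the odd page (the nodes_test[...]=513 / =512 assignments above). A small worked sketch of that arithmetic under those assumptions (standalone, not the hugepages.sh helper itself):

#!/usr/bin/env bash
# Worked example of the odd_alloc sizing visible in the trace; variable names
# mirror the trace but the script itself is illustrative only.
hugemem_mb=2049                     # HUGEMEM exported by the test
hugepage_kb=2048                    # Hugepagesize reported in /proc/meminfo
size_kb=$((hugemem_mb * 1024))      # 2098176 kB, matching get_test_nr_hugepages 2098176
# 2098176 / 2048 = 1024.5, and the trace settles on 1025 pages; ceiling division
# is used here purely to reproduce that figure.
nr_hugepages=$(((size_kb + hugepage_kb - 1) / hugepage_kb))   # 1025
no_nodes=2
declare -a nodes_test
nodes_test[1]=$((nr_hugepages / no_nodes))            # 512 pages on node 1
nodes_test[0]=$((nr_hugepages - nodes_test[1]))       # 513 pages on node 0 (the odd page)
echo "total=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
# -> total=1025 node0=513 node1=512, the per-node targets the test records before
#    setup.sh runs with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes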
00:04:41.059 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:41.059 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:41.059 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.319 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105224332 kB' 'MemAvailable: 108549468 kB' 'Buffers: 2704 kB' 'Cached: 14640000 kB' 'SwapCached: 0 kB' 'Active: 11614608 kB' 'Inactive: 3518544 kB' 'Active(anon): 11135384 kB' 'Inactive(anon): 0 kB' 
'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493136 kB' 'Mapped: 228052 kB' 'Shmem: 10644936 kB' 'KReclaimable: 303400 kB' 'Slab: 1125912 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822512 kB' 'KernelStack: 27184 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12649640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.320 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 
20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 
20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.589 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105225120 kB' 'MemAvailable: 108550256 kB' 'Buffers: 2704 kB' 'Cached: 14640004 kB' 'SwapCached: 0 kB' 'Active: 11614212 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134988 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492816 kB' 'Mapped: 228052 kB' 'Shmem: 10644940 kB' 'KReclaimable: 303400 kB' 'Slab: 1125904 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822504 kB' 'KernelStack: 27168 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12649660 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 
20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105225384 kB' 'MemAvailable: 108550520 kB' 'Buffers: 2704 kB' 'Cached: 14640020 kB' 'SwapCached: 0 kB' 'Active: 11613736 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134512 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492800 kB' 'Mapped: 227976 kB' 'Shmem: 10644956 kB' 'KReclaimable: 303400 kB' 'Slab: 1125876 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822476 kB' 'KernelStack: 27168 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12649680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.593 nr_hugepages=1025 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.593 resv_hugepages=0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.593 surplus_hugepages=0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.593 anon_hugepages=0 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105225384 kB' 'MemAvailable: 108550520 kB' 'Buffers: 2704 kB' 'Cached: 14640040 kB' 'SwapCached: 0 kB' 'Active: 11613780 kB' 'Inactive: 3518544 kB' 'Active(anon): 11134556 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492796 kB' 'Mapped: 227976 kB' 'Shmem: 10644976 kB' 'KReclaimable: 303400 kB' 'Slab: 1125876 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822476 kB' 'KernelStack: 27168 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12649700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 
20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
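The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries above and below is setup/common.sh's get_meminfo scanning /proc/meminfo (or a per-node meminfo file when a node argument is given, as happens a few entries further on) one "Key: value" line at a time, skipping every field until it reaches the requested key, then echoing its value (1025, at common.sh@33 below) and returning. A minimal Bash sketch of that scan pattern, assuming a simplified helper named get_meminfo_sketch rather than the real setup/common.sh source:

    #!/usr/bin/env bash
    # Sketch only: the IFS=': ' split, the skip-on-mismatch continue and the echo of
    # the matched value mirror the trace; the function name and error handling are
    # illustrative, not taken from setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field shows up as one 'continue' entry in the log
            echo "$val"                        # e.g. 1025 for HugePages_Total in this run
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Total

The per-node reads that follow reuse the same loop; only the file being read changes.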
00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59154040 kB' 'MemUsed: 6504968 kB' 'SwapCached: 0 kB' 'Active: 2776440 kB' 'Inactive: 223860 kB' 'Active(anon): 2537016 kB' 
'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2785192 kB' 'Mapped: 91304 kB' 'AnonPages: 218284 kB' 'Shmem: 2321908 kB' 'KernelStack: 14936 kB' 'PageTables: 5484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 591472 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 466864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
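At hugepages.sh@117 and common.sh@18 just above, the same helper is invoked again with node=0, so common.sh@23-24 switch the source file from /proc/meminfo to /sys/devices/system/node/node0/meminfo and common.sh@29 strips the leading "Node 0 " from every captured line before the scan restarts, this time looking for HugePages_Surp. The node0 dump above reports 512 hugepages, 512 free and 0 surplus. A one-line illustration of that prefix strip, using the same extglob pattern that appears at common.sh@29 (the sample line itself is made up):

    shopt -s extglob                          # required for the +([0-9]) pattern below
    line='Node 0 HugePages_Total:     512'    # hypothetical line from a per-node meminfo file
    echo "${line#Node +([0-9]) }"             # prints: HugePages_Total:     512

In the script the same expansion is applied to the whole mapfile'd array at once, which is what the mem=("${mem[@]#Node +([0-9]) }") entry records.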
00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.596 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46072404 kB' 'MemUsed: 14607432 kB' 'SwapCached: 0 kB' 'Active: 8837308 kB' 'Inactive: 3294684 kB' 'Active(anon): 8597508 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3294684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11857576 kB' 'Mapped: 136672 kB' 'AnonPages: 274476 kB' 'Shmem: 8323092 kB' 'KernelStack: 12216 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 534404 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 355612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
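Taken together, the two per-node dumps account for the odd request: node0 holds 512 hugepages and node1 holds 513, 1025 in total with 0 surplus on either node, consistent with the hugepages.sh@110 check earlier. The hugepages.sh@126-130 entries that follow fold the per-node counts into sorted sets before comparing them; the "node0=512 expecting 513" / "node1=513 expecting 512" lines below suggest the split came out mirrored relative to the request, and the sorted-set comparison still accepts it. A sketch of that comparison trick, with counts taken from this run and variable names that are illustrative rather than the script's own:

    # Using the count itself as the index of an indexed array makes "${!arr[*]}"
    # expand the distinct counts in ascending order: a cheap sorted-set comparison.
    declare -a wanted=() actual=()
    wanted[513]=1; wanted[512]=1      # per-node counts the test asked for
    actual[512]=1; actual[513]=1      # per-node counts read back from node0/node1 meminfo above
    [[ "${!actual[*]}" == "${!wanted[*]}" ]] && echo 'odd_alloc split accepted'   # both sides expand to "512 513"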
00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.597 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.598 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:41.599 node0=512 expecting 513 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:41.599 node1=513 expecting 512 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:41.599 00:04:41.599 real 0m3.781s 00:04:41.599 user 0m1.485s 00:04:41.599 sys 0m2.341s 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.599 20:11:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.599 ************************************ 00:04:41.599 END TEST odd_alloc 00:04:41.599 ************************************ 00:04:41.599 20:11:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:41.599 20:11:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.599 20:11:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.599 20:11:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.599 20:11:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.599 ************************************ 00:04:41.599 START TEST custom_alloc 00:04:41.599 ************************************ 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.599 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.860 20:11:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.163 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:45.163 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:45.163 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104188464 kB' 'MemAvailable: 107513600 kB' 'Buffers: 2704 kB' 'Cached: 14640176 kB' 'SwapCached: 0 kB' 'Active: 11615744 kB' 'Inactive: 3518544 kB' 'Active(anon): 11136520 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494224 kB' 'Mapped: 228116 kB' 'Shmem: 10645112 kB' 'KReclaimable: 303400 kB' 'Slab: 1125572 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822172 kB' 'KernelStack: 27184 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650468 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.163 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.429 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
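[Editor's note] The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above and below is the xtrace of get_meminfo in setup/common.sh walking every /proc/meminfo key until it reaches the one requested (AnonHugePages here). A rough reconstruction from the traced commands follows; this is a sketch, not the verbatim SPDK source, and the argument handling plus the per-node branch are assumptions:

  # Sketch of get_meminfo as suggested by the setup/common.sh@17-33 trace above.
  shopt -s extglob                      # needed for the +([0-9]) pattern below
  get_meminfo() {
    local get=$1                        # meminfo key to look up, e.g. AnonHugePages
    local node=${2:-}                   # optional NUMA node (empty in this run)
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given and its meminfo file exists, read the per-node file instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix used by per-node files
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue  # skip keys that do not match, as seen in the trace
      echo "$val"
      return 0
    done < <(printf '%s\n' "${mem[@]}")
  }

With this shape, get_meminfo AnonHugePages prints 0 on this host, which is what hugepages.sh@97 records as anon=0 a few entries further down.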
00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.430 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104188668 kB' 'MemAvailable: 107513804 kB' 'Buffers: 2704 kB' 'Cached: 14640180 kB' 'SwapCached: 0 kB' 'Active: 11614596 kB' 'Inactive: 3518544 kB' 'Active(anon): 11135372 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493500 kB' 'Mapped: 228000 kB' 'Shmem: 10645116 kB' 'KReclaimable: 303400 kB' 'Slab: 1125572 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822172 kB' 'KernelStack: 27168 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 
20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.431 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
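[Editor's note] The scan running here repeats the same key walk for HugePages_Surp, and the next one does it for HugePages_Rsvd. Once all three lookups return 0, the verify_nr_hugepages bookkeeping traced at hugepages.sh@102-110 further below reduces to simple arithmetic. A minimal sketch of that check, using the values printed in this log and assuming the variable wiring matches the traced names:

  # Values taken from the meminfo snapshots and hugepages.sh trace in this log
  nr_hugepages=1536   # requested at hugepages.sh@188
  anon=0              # AnonHugePages
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  (( 1536 == nr_hugepages + surp + resv ))   # 1536 == 1536 + 0 + 0, so the check passes
  (( 1536 == nr_hugepages ))                 # also true, so HugePages_Total is queried next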
00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.432 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.432 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104189392 kB' 'MemAvailable: 107514528 kB' 'Buffers: 2704 kB' 'Cached: 14640196 kB' 'SwapCached: 0 kB' 'Active: 11614612 kB' 'Inactive: 3518544 kB' 'Active(anon): 11135388 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493504 kB' 'Mapped: 228000 kB' 'Shmem: 10645132 kB' 'KReclaimable: 303400 kB' 'Slab: 1125572 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822172 kB' 'KernelStack: 27168 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
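[Editor's note] For reference, every counter this loop is searching for is already present in the /proc/meminfo snapshot printed at setup/common.sh@16, and the values are self-consistent: 1536 pages of 2048 kB give the 3145728 kB reported as Hugetlb. On a host like this they can also be pulled out directly, for example:

  # Hugepage counters as reported in this run:
  #   HugePages_Total: 1536    HugePages_Free: 1536
  #   HugePages_Rsvd:  0       HugePages_Surp: 0
  #   Hugepagesize:    2048 kB
  #   Hugetlb:         3145728 kB   (= 1536 * 2048 kB)
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo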
00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.433 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:45.434 nr_hugepages=1536 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.434 resv_hugepages=0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.434 surplus_hugepages=0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.434 anon_hugepages=0 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.434 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104190108 kB' 'MemAvailable: 107515244 kB' 'Buffers: 2704 kB' 'Cached: 14640236 kB' 'SwapCached: 0 kB' 'Active: 11614292 kB' 'Inactive: 3518544 kB' 'Active(anon): 11135068 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493116 kB' 'Mapped: 228000 kB' 'Shmem: 10645172 kB' 'KReclaimable: 303400 kB' 'Slab: 1125572 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 822172 kB' 'KernelStack: 27152 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12650528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
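
The xtrace above is setup/common.sh's get_meminfo helper walking a meminfo snapshot one field at a time: mapfile pulls the whole file into an array, the "Node <n> " prefix is stripped for per-node files, and an IFS=': ' read loop skips every key (the long run of "continue" lines) until it hits the requested one, then echoes its value. A minimal standalone sketch of that pattern follows; the helper name get_meminfo_value is made up here and this is not the exact setup/common.sh code, just the same parsing idea.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo scan traced above (not the real setup/common.sh).
shopt -s extglob                                        # needed for the "Node N " strip below

get_meminfo_value() {                                   # hypothetical name
    local get=$1 node=${2:-}                            # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs when a node index is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                    # per-node files prefix each line with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"          # e.g. var=HugePages_Rsvd val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

resv=$(get_meminfo_value HugePages_Rsvd)                # system-wide lookup, as in the trace above
node0_surp=$(get_meminfo_value HugePages_Surp 0)        # per-node variant used later in this log

This is why the log shows one "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" test per meminfo field: the requested key (HugePages_Rsvd here, yielding resv=0) is matched literally against every line of the snapshot.
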
00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.435 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.436 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59160360 kB' 'MemUsed: 6498648 kB' 'SwapCached: 0 kB' 'Active: 2778328 kB' 'Inactive: 223860 kB' 'Active(anon): 2538904 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2785364 kB' 'Mapped: 91304 kB' 'AnonPages: 220012 kB' 'Shmem: 2322080 kB' 'KernelStack: 14952 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 591236 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 466628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.437 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45030584 kB' 'MemUsed: 15649252 kB' 'SwapCached: 0 kB' 'Active: 8835984 kB' 'Inactive: 3294684 kB' 'Active(anon): 8596184 kB' 'Inactive(anon): 0 kB' 'Active(file): 239800 kB' 'Inactive(file): 3294684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11857596 kB' 'Mapped: 136696 kB' 'AnonPages: 273104 kB' 'Shmem: 8323112 kB' 'KernelStack: 12200 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 534336 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 355544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.438 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:45.439 node0=512 expecting 512 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:45.439 node1=1024 expecting 1024 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:45.439 00:04:45.439 real 0m3.790s 00:04:45.439 user 0m1.524s 00:04:45.439 sys 0m2.325s 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.439 20:11:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:45.439 ************************************ 00:04:45.439 END TEST custom_alloc 00:04:45.439 ************************************ 00:04:45.439 20:11:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:45.439 20:11:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:45.439 20:11:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.439 20:11:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.439 20:11:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.700 ************************************ 00:04:45.700 START TEST no_shrink_alloc 00:04:45.700 ************************************ 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.700 20:11:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.243 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.243 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.243 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.243 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.243 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:48.504 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.504 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.770 
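[Editor's note] The trace above closes the custom_alloc case (node0=512 expecting 512, node1=1024 expecting 1024) and opens no_shrink_alloc by requesting 2097152 kB of hugepages pinned to node 0, which with the 2048 kB page size reported later in the log works out to 1024 pages on that node. Below is a minimal sketch of that size-to-per-node-count calculation, assuming the 2048 kB default hugepage size shown in the trace; the function and variable names are illustrative and are not the actual setup/hugepages.sh helpers.

    #!/usr/bin/env bash
    # Sketch: turn a requested hugepage size (kB) plus an optional list of NUMA
    # node ids into a per-node page-count map, matching what the trace reports
    # (2097152 kB / 2048 kB per page = 1024 pages, all assigned to node 0).
    # Names are illustrative, not the SPDK setup scripts.
    shopt -s nullglob

    compute_per_node_hugepages() {
        local size_kb=$1; shift
        local -a user_nodes=("$@")
        local hugepage_kb
        hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        hugepage_kb=${hugepage_kb:-2048}                  # 2048 in this run
        local nr_pages=$(( size_kb / hugepage_kb ))       # 1024 in this run
        declare -gA nodes_expected=()
        local node
        if (( ${#user_nodes[@]} > 0 )); then
            # Pin the whole allocation to the nodes the caller asked for.
            for node in "${user_nodes[@]}"; do
                nodes_expected[$node]=$nr_pages
            done
        else
            # No explicit nodes: split the request evenly across all online nodes.
            local -a all_nodes=(/sys/devices/system/node/node[0-9]*)
            local count=${#all_nodes[@]}
            (( count == 0 )) && count=1
            for node in "${!all_nodes[@]}"; do
                nodes_expected[$node]=$(( nr_pages / count ))
            done
        fi
    }

    compute_per_node_hugepages 2097152 0    # same request as the no_shrink_alloc test
    for node in "${!nodes_expected[@]}"; do
        echo "node${node} expecting ${nodes_expected[$node]}"
    done

Run as-is this prints "node0 expecting 1024", the expectation the no_shrink_alloc test goes on to verify.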
20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105239840 kB' 'MemAvailable: 108564976 kB' 'Buffers: 2704 kB' 'Cached: 14640356 kB' 'SwapCached: 0 kB' 'Active: 11623128 kB' 'Inactive: 3518544 kB' 'Active(anon): 11143904 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502116 kB' 'Mapped: 228580 kB' 'Shmem: 10645292 kB' 'KReclaimable: 303400 kB' 'Slab: 1124636 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821236 kB' 'KernelStack: 27264 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12662216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235352 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.770 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.771 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105239844 kB' 'MemAvailable: 108564980 kB' 'Buffers: 2704 kB' 'Cached: 14640360 kB' 'SwapCached: 0 kB' 'Active: 11623080 kB' 
'Inactive: 3518544 kB' 'Active(anon): 11143856 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502120 kB' 'Mapped: 228904 kB' 'Shmem: 10645296 kB' 'KReclaimable: 303400 kB' 'Slab: 1124608 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821208 kB' 'KernelStack: 27280 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12662232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 
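[Editor's note] The surrounding trace is the per-field scan the common helper performs to pull a single key (AnonHugePages just above, HugePages_Surp here) out of /proc/meminfo: it snapshots the file, splits each line with IFS=': ', skips non-matching keys with continue, and echoes the value (0 in this run). The sketch below follows that same pattern visible in the trace but is a simplified stand-in, not the actual setup/common.sh.

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup, mirroring the pattern visible in the
    # trace (mapfile -t mem, stripping the "Node <N> " prefix, then an
    # IFS=': ' read loop). Simplified stand-in, not the real setup/common.sh.
    shopt -s extglob

    get_meminfo_value() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo the kernel exposes in sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node <N> "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        echo 0                              # key absent: report 0 so callers can do arithmetic safely
    }

    get_meminfo_value AnonHugePages         # 0 in this run
    get_meminfo_value HugePages_Surp 0      # surplus 2 MiB pages on node 0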
20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.772 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.773 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105236000 kB' 'MemAvailable: 108561136 kB' 'Buffers: 2704 kB' 'Cached: 14640376 kB' 'SwapCached: 0 kB' 'Active: 11620000 kB' 'Inactive: 3518544 kB' 'Active(anon): 11140776 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 499008 kB' 'Mapped: 228540 kB' 'Shmem: 10645312 kB' 'KReclaimable: 303400 kB' 'Slab: 1124656 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821256 kB' 'KernelStack: 27264 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12659208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 
20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.774 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.775 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.776 nr_hugepages=1024 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.776 resv_hugepages=0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.776 surplus_hugepages=0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.776 anon_hugepages=0 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105232724 kB' 'MemAvailable: 108557860 kB' 'Buffers: 2704 kB' 'Cached: 14640396 kB' 'SwapCached: 0 kB' 'Active: 11623040 kB' 'Inactive: 3518544 kB' 'Active(anon): 11143816 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502032 kB' 'Mapped: 228952 kB' 'Shmem: 10645332 kB' 'KReclaimable: 303400 kB' 'Slab: 1124648 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821248 kB' 'KernelStack: 27248 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12662280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235320 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 
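
The values echoed a few lines above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check that hugepages.sh repeats after every lookup: the kernel's HugePages_Total has to equal the requested page count plus surplus plus reserved pages. The same check can be expressed on its own as below, with the values from this run filled in; the variable names are illustrative rather than the test's own.

  # Consistency check from this run: HugePages_Total (1024) must equal the
  # requested pages plus surplus plus reserved pages (1024 + 0 + 0).
  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK: $total pages"
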
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.776 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 
20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.777 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58125628 kB' 'MemUsed: 7533380 kB' 'SwapCached: 0 kB' 'Active: 2777152 kB' 'Inactive: 223860 kB' 'Active(anon): 2537728 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2785468 kB' 'Mapped: 91304 kB' 'AnonPages: 218740 kB' 'Shmem: 2322184 kB' 'KernelStack: 14936 kB' 'PageTables: 5476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
124608 kB' 'Slab: 590520 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 465912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.778 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 
20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.040 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 
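
Every one of the repetitive IFS=': ' / read -r var val _ / [[ $var == ... ]] / continue lines in this trace comes from the same helper pattern: scan a meminfo file field by field and print the value of the one key that was requested, preferring the per-node file under /sys/devices/system/node when a node number is given and falling back to /proc/meminfo otherwise. The sketch below is a self-contained approximation of that pattern written from the trace; it is not the setup/common.sh source.

  #!/usr/bin/env bash
  # Approximation of the meminfo lookup exercised above, not the real helper.
  shopt -s extglob   # needed for the "Node N " prefix strip below

  get_meminfo_value() {                 # illustrative name, not SPDK's
      local get=$1 node=${2:-}          # key to look up, optional NUMA node
      local mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_value HugePages_Total     # system-wide pool, 1024 in this run
  get_meminfo_value HugePages_Surp 0    # surplus pages on node0, 0 in this run

With a node argument the scan walks /sys/devices/system/node/node0/meminfo, which is why every field of that file shows up in the trace before HugePages_Surp finally matches, 0 is echoed and the helper returns.
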
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.041 node0=1024 expecting 1024 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.041 20:12:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.349 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.349 0000:65:00.0 
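The hugepages.sh@202 assignments above (CLEAR_HUGE=no, NRHUGE=512) are the request handed to scripts/setup.sh by the "setup output" step. A rough sketch of the kind of check that could produce the INFO line a few lines below; this is an assumption for illustration, not SPDK's actual setup.sh logic:

  # assumed logic, illustration only: leave existing pages alone when CLEAR_HUGE=no
  # and node0 already holds at least the requested number of 2 MiB pages
  requested=${NRHUGE:-512}
  node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  allocated=$(cat "$node_sysfs/nr_hugepages")
  if [[ ${CLEAR_HUGE:-no} == no ]] && (( allocated >= requested )); then
      echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
  else
      echo "$requested" > "$node_sysfs/nr_hugepages"   # writing this sysfs file needs root
  fi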
00:04:52.349 0000:80:01.0-0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:52.349 0000:00:01.0-0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:52.349 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:52.349 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:52.349 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:52.349 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=AnonHugePages node=; mem_f=/proc/meminfo; mapfile -t mem; strip "Node <id> " prefixes
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105187992 kB' 'MemAvailable: 108513128 kB' 'Buffers: 2704 kB' 'Cached: 14640508 kB' 'SwapCached: 0 kB' 'Active: 11624268 kB' 'Inactive: 3518544 kB' 'Active(anon): 11145044 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502400 kB' 'Mapped: 229088 kB' 'Shmem: 10645444 kB' 'KReclaimable: 303400 kB' 'Slab: 1124804 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821404 kB' 'KernelStack: 27264 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12661540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235592 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB'
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read/continue loop over every key from MemTotal through HardwareCorrupted; none match AnonHugePages
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
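The anon=0 result above comes out of the per-key scan that common.sh traces at @31-@33: split each meminfo line on ': ', skip keys until the requested one matches, then echo its value. A minimal standalone sketch of that idea (illustrative, not the real setup/common.sh get_meminfo; the helper name is made up):

  # print the value of one /proc/meminfo key, or of a node's meminfo when a node id is given
  get_meminfo_value() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && file=/sys/devices/system/node/node$node/meminfo
      local var val _
      # per-node files prefix every line with "Node <id> "; strip it so the keys match /proc/meminfo
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ +//' "$file")
      return 1
  }

  get_meminfo_value AnonHugePages      # 0 in the snapshot above
  get_meminfo_value HugePages_Free 0   # per-node variant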
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.350 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp node=; mem_f=/proc/meminfo; mapfile -t mem; strip "Node <id> " prefixes
00:04:52.351 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' of the full /proc/meminfo snapshot again (values essentially unchanged; hugepage counters still 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0')
00:04:52.351 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read/continue loop over every key from MemTotal through HugePages_Rsvd; none match HugePages_Surp
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
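surp=0 above is the system-wide surplus counter from /proc/meminfo. A per-node breakdown is also available under the standard sysfs node directories, which expose allocated, free and surplus 2 MiB pages per node (reserved pages only show up in the global /proc/meminfo counter). A small illustrative loop, assuming the 2048 kB page size reported in the snapshot above:

  # per-node 2 MiB hugepage counters (standard Linux sysfs layout)
  for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      node=${d#/sys/devices/system/node/node}
      node=${node%%/*}
      printf 'node%s: nr=%s free=%s surplus=%s\n' "$node" \
          "$(cat "$d/nr_hugepages")" "$(cat "$d/free_hugepages")" "$(cat "$d/surplus_hugepages")"
  done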
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Rsvd node=; mem_f=/proc/meminfo; mapfile -t mem; strip "Node <id> " prefixes
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' of the full /proc/meminfo snapshot again (values essentially unchanged; hugepage counters still 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0')
00:04:52.352 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read/continue loop over the keys from MemTotal through KernelStack so far; none match HugePages_Rsvd
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.651 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.652 nr_hugepages=1024 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.652 resv_hugepages=0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.652 surplus_hugepages=0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.652 anon_hugepages=0 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 105191288 kB' 'MemAvailable: 108516424 kB' 'Buffers: 2704 kB' 'Cached: 14640552 kB' 'SwapCached: 0 kB' 'Active: 11623948 kB' 'Inactive: 3518544 kB' 'Active(anon): 11144724 kB' 'Inactive(anon): 0 kB' 'Active(file): 479224 kB' 'Inactive(file): 3518544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502532 kB' 'Mapped: 228964 kB' 'Shmem: 10645488 kB' 'KReclaimable: 303400 kB' 'Slab: 1124804 kB' 'SReclaimable: 303400 kB' 'SUnreclaim: 821404 kB' 'KernelStack: 27248 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12661600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 122688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3974516 kB' 'DirectMap2M: 30308352 kB' 'DirectMap1G: 101711872 kB' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.652 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 
20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.653 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.654 20:12:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58120844 kB' 'MemUsed: 7538164 kB' 'SwapCached: 0 kB' 'Active: 2777816 kB' 'Inactive: 223860 kB' 'Active(anon): 2538392 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 223860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2785588 kB' 'Mapped: 91456 kB' 'AnonPages: 219360 kB' 'Shmem: 2322304 kB' 'KernelStack: 14984 kB' 'PageTables: 5624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124608 kB' 'Slab: 590820 kB' 'SReclaimable: 124608 kB' 'SUnreclaim: 466212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.654 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 
20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.655 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.655 20:12:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.655 node0=1024 expecting 1024 00:04:52.656 20:12:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.656 00:04:52.656 real 0m6.977s 00:04:52.656 user 0m2.658s 00:04:52.656 sys 0m4.358s 00:04:52.656 20:12:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.656 20:12:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.656 ************************************ 00:04:52.656 END TEST no_shrink_alloc 00:04:52.656 ************************************ 00:04:52.656 20:12:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.656 20:12:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.656 00:04:52.656 real 0m26.330s 00:04:52.656 user 0m10.199s 00:04:52.656 sys 0m16.429s 00:04:52.656 20:12:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.656 20:12:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.656 ************************************ 00:04:52.656 END TEST hugepages 00:04:52.656 ************************************ 00:04:52.656 20:12:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.656 20:12:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.656 20:12:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.656 20:12:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.656 20:12:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.656 ************************************ 00:04:52.656 START TEST driver 00:04:52.656 ************************************ 00:04:52.656 20:12:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.656 * Looking for test storage... 
00:04:52.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.656 20:12:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:52.656 20:12:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.656 20:12:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.944 20:12:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:57.944 20:12:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.944 20:12:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.944 20:12:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:57.944 ************************************ 00:04:57.944 START TEST guess_driver 00:04:57.944 ************************************ 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:57.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:57.944 Looking for driver=vfio-pci 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.944 20:12:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.247 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.508 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:01.508 20:12:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:01.508 20:12:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.508 20:12:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.713 00:05:05.713 real 0m8.146s 00:05:05.713 user 0m2.554s 00:05:05.713 sys 0m4.709s 00:05:05.713 20:12:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.713 20:12:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.713 ************************************ 00:05:05.713 END TEST guess_driver 00:05:05.713 ************************************ 00:05:05.974 20:12:17 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:05.974 00:05:05.974 real 0m13.204s 00:05:05.974 user 0m4.076s 00:05:05.974 sys 0m7.438s 00:05:05.974 20:12:17 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.974 20:12:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.974 ************************************ 00:05:05.974 END TEST driver 00:05:05.974 ************************************ 00:05:05.974 20:12:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.974 20:12:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.974 20:12:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.974 20:12:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.974 20:12:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.974 ************************************ 00:05:05.974 START TEST devices 00:05:05.974 ************************************ 00:05:05.974 20:12:17 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.974 * Looking for test storage... 00:05:05.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:05.974 20:12:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.974 20:12:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:05.974 20:12:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.974 20:12:17 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:10.182 
20:12:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:10.182 No valid GPT data, bailing 00:05:10.182 20:12:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:10.182 20:12:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.182 20:12:21 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:10.182 20:12:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.182 20:12:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:10.182 ************************************ 00:05:10.182 START TEST nvme_mount 00:05:10.182 ************************************ 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.182 20:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:11.125 Creating new GPT entries in memory. 00:05:11.125 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:11.125 other utilities. 00:05:11.125 20:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:11.125 20:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.125 20:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.125 20:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.125 20:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:12.067 Creating new GPT entries in memory. 00:05:12.067 The operation has completed successfully. 00:05:12.067 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:12.067 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.067 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3359102 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.328 20:12:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.328 20:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.631 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.892 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.892 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.892 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.154 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:16.154 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:16.154 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.154 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.154 20:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:19.457 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.718 20:12:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.718 20:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.019 20:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.280 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.280 00:05:23.280 real 0m13.251s 00:05:23.280 user 0m4.131s 00:05:23.280 sys 0m7.010s 00:05:23.280 20:12:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.280 20:12:35 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:23.280 ************************************ 00:05:23.280 END TEST nvme_mount 00:05:23.280 ************************************ 00:05:23.541 20:12:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:23.541 20:12:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:23.541 20:12:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.541 20:12:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.541 20:12:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.541 ************************************ 00:05:23.541 START TEST dm_mount 00:05:23.541 ************************************ 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.541 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.542 20:12:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:24.485 Creating new GPT entries in memory. 00:05:24.485 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.485 other utilities. 00:05:24.485 20:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.485 20:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.485 20:12:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:24.485 20:12:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.485 20:12:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:25.490 Creating new GPT entries in memory. 00:05:25.490 The operation has completed successfully. 00:05:25.490 20:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:25.490 20:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.490 20:12:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.490 20:12:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.490 20:12:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:26.434 The operation has completed successfully. 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3364207 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.434 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.695 20:12:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.001 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.001 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.001 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.001 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.001 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.002 20:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.263 20:12:42 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.263 20:12:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.566 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.567 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.828 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.828 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.828 20:12:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.828 00:05:33.828 real 0m10.246s 00:05:33.828 user 0m2.668s 00:05:33.828 sys 0m4.607s 00:05:33.828 20:12:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.828 20:12:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.828 ************************************ 00:05:33.828 END TEST dm_mount 00:05:33.828 ************************************ 00:05:33.828 20:12:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.828 20:12:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.088 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:34.088 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:34.088 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.088 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.088 20:12:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:34.088 00:05:34.088 real 0m28.085s 00:05:34.088 user 0m8.410s 00:05:34.088 sys 0m14.465s 00:05:34.088 20:12:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.088 20:12:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:34.088 ************************************ 00:05:34.088 END TEST devices 00:05:34.088 ************************************ 00:05:34.088 20:12:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:34.088 00:05:34.088 real 1m33.256s 00:05:34.088 user 0m31.151s 00:05:34.088 sys 0m53.204s 00:05:34.088 20:12:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.088 20:12:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:34.088 ************************************ 00:05:34.088 END TEST setup.sh 00:05:34.088 ************************************ 00:05:34.088 20:12:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.088 20:12:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:37.585 Hugepages 00:05:37.585 node hugesize free / total 00:05:37.585 node0 1048576kB 0 / 0 00:05:37.585 node0 2048kB 2048 / 2048 00:05:37.585 node1 1048576kB 0 / 0 00:05:37.585 node1 2048kB 0 / 0 00:05:37.585 00:05:37.585 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.585 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:37.585 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:37.585 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:37.585 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:37.585 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:37.585 20:12:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:37.585 20:12:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:37.585 20:12:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:37.585 20:12:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:40.887 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.887 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:41.148 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:43.060 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:43.060 20:12:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:44.002 20:12:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:44.002 20:12:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:44.002 20:12:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:44.002 20:12:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:44.002 20:12:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:44.002 20:12:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:44.002 20:12:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.002 20:12:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.002 20:12:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:44.261 20:12:56 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:44.261 20:12:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:44.261 20:12:56 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.560 Waiting for block devices as requested 00:05:47.560 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:47.560 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:47.820 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:47.820 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:48.081 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:48.081 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:48.081 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:48.341 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:48.341 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:48.341 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:48.341 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:48.601 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:48.861 20:13:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:48.861 20:13:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:48.861 20:13:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:48.861 20:13:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:48.861 20:13:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:48.861 20:13:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:48.861 20:13:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:48.861 20:13:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:48.861 20:13:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:48.861 20:13:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:48.861 20:13:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:48.861 20:13:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:48.861 20:13:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:48.861 20:13:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:48.861 20:13:00 -- common/autotest_common.sh@1557 -- # continue 00:05:48.861 20:13:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:48.861 20:13:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.861 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:48.861 20:13:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:48.861 20:13:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.861 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:48.861 20:13:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:52.166 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:52.166 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:52.166 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:52.427 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:52.427 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:52.427 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:52.427 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:52.688 20:13:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:52.688 20:13:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.688 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:52.688 20:13:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:52.688 20:13:04 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:52.688 20:13:04 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:52.688 20:13:04 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:52.688 20:13:04 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:52.688 20:13:04 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:52.688 20:13:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:52.688 20:13:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:52.688 20:13:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.688 20:13:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:52.688 20:13:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:52.688 20:13:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:52.689 20:13:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:52.689 20:13:04 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:52.689 20:13:04 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:52.689 20:13:04 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:52.689 20:13:04 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:52.689 20:13:04 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:52.689 20:13:04 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:52.689 20:13:04 -- common/autotest_common.sh@1593 -- # return 0 00:05:52.689 20:13:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:52.689 20:13:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:52.689 20:13:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.689 20:13:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.689 20:13:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:52.689 20:13:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.689 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:52.689 20:13:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:52.689 20:13:04 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.689 20:13:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.689 20:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.689 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:52.949 ************************************ 00:05:52.949 START TEST env 00:05:52.949 ************************************ 00:05:52.949 20:13:04 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.949 * Looking for test storage... 
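Before the env suite starts, the pre-cleanup and afterboot traces above reduce to three shell-level checks per NVMe device: enumerate controllers by PCI address, confirm the controller advertises Namespace Management in OACS, and only consider an Opal revert when the PCI device ID is 0x0a54. The following is a condensed sketch of those checks, not the autotest code itself; it assumes a single controller at the address printed in the log and the same gen_nvme.sh/jq/nvme-cli tools the trace uses.

    # Condensed sketch of the pre-cleanup checks traced above (assumption: one controller).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # NVMe PCI addresses, gathered the same way as in the trace (gen_nvme.sh + jq).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Map the PCI address to its /dev/nvmeX controller node via sysfs, as the trace does.
        ctrl=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")

        # OACS bit 3 (0x08) = Namespace Management; the trace reads 0x5f, so the bit is set.
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
        ns_manage=$((oacs & 0x8))

        # Opal revert is only attempted for device ID 0x0a54; the 144d:a80a Samsung drive
        # above reports 0xa80a, so that step is skipped.
        devid=$(cat "/sys/bus/pci/devices/$bdf/device")
        echo "$bdf ctrl=$ctrl oacs=$oacs ns_manage=$ns_manage device_id=$devid"
    done

With the values shown in the log this prints oacs= 0x5f, ns_manage=8 and device_id=0xa80a, which is exactly why the revert path is not taken for this drive.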
00:05:52.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:52.949 20:13:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.949 20:13:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.949 20:13:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.949 20:13:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.949 ************************************ 00:05:52.949 START TEST env_memory 00:05:52.949 ************************************ 00:05:52.949 20:13:04 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.949 00:05:52.949 00:05:52.949 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.949 http://cunit.sourceforge.net/ 00:05:52.949 00:05:52.949 00:05:52.949 Suite: memory 00:05:52.949 Test: alloc and free memory map ...[2024-07-22 20:13:04.950892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:53.210 passed 00:05:53.210 Test: mem map translation ...[2024-07-22 20:13:04.993218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:53.210 [2024-07-22 20:13:04.993267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:53.210 [2024-07-22 20:13:04.993331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:53.210 [2024-07-22 20:13:04.993351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:53.210 passed 00:05:53.210 Test: mem map registration ...[2024-07-22 20:13:05.067212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:53.210 [2024-07-22 20:13:05.067252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:53.210 passed 00:05:53.210 Test: mem map adjacent registrations ...passed 00:05:53.210 00:05:53.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.210 suites 1 1 n/a 0 0 00:05:53.210 tests 4 4 4 0 0 00:05:53.210 asserts 152 152 152 0 n/a 00:05:53.210 00:05:53.210 Elapsed time = 0.259 seconds 00:05:53.210 00:05:53.210 real 0m0.298s 00:05:53.210 user 0m0.274s 00:05:53.210 sys 0m0.023s 00:05:53.210 20:13:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.210 20:13:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:53.210 ************************************ 00:05:53.210 END TEST env_memory 00:05:53.210 ************************************ 00:05:53.210 20:13:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:53.210 20:13:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:53.210 20:13:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
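Each env sub-test above is launched through the same run_test helper, which is what produces the asterisk START TEST / END TEST banners and the real/user/sys timing lines in the log. A minimal stand-in with the same shape is sketched below; it is not SPDK's actual helper (whose argument checking and xtrace handling are more involved), just enough to show where those banner and timing lines come from.

    # Rough stand-in for the banner/timing wrapper seen throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # e.g. the memory_ut or vtophys binaries above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage mirroring the trace:
    # run_test env_memory  "$rootdir/test/env/memory/memory_ut"
    # run_test env_vtophys "$rootdir/test/env/vtophys/vtophys"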
00:05:53.210 20:13:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.210 20:13:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.473 ************************************ 00:05:53.473 START TEST env_vtophys 00:05:53.473 ************************************ 00:05:53.473 20:13:05 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:53.473 EAL: lib.eal log level changed from notice to debug 00:05:53.473 EAL: Detected lcore 0 as core 0 on socket 0 00:05:53.473 EAL: Detected lcore 1 as core 1 on socket 0 00:05:53.473 EAL: Detected lcore 2 as core 2 on socket 0 00:05:53.473 EAL: Detected lcore 3 as core 3 on socket 0 00:05:53.473 EAL: Detected lcore 4 as core 4 on socket 0 00:05:53.473 EAL: Detected lcore 5 as core 5 on socket 0 00:05:53.473 EAL: Detected lcore 6 as core 6 on socket 0 00:05:53.473 EAL: Detected lcore 7 as core 7 on socket 0 00:05:53.473 EAL: Detected lcore 8 as core 8 on socket 0 00:05:53.473 EAL: Detected lcore 9 as core 9 on socket 0 00:05:53.473 EAL: Detected lcore 10 as core 10 on socket 0 00:05:53.473 EAL: Detected lcore 11 as core 11 on socket 0 00:05:53.473 EAL: Detected lcore 12 as core 12 on socket 0 00:05:53.473 EAL: Detected lcore 13 as core 13 on socket 0 00:05:53.473 EAL: Detected lcore 14 as core 14 on socket 0 00:05:53.473 EAL: Detected lcore 15 as core 15 on socket 0 00:05:53.473 EAL: Detected lcore 16 as core 16 on socket 0 00:05:53.473 EAL: Detected lcore 17 as core 17 on socket 0 00:05:53.473 EAL: Detected lcore 18 as core 18 on socket 0 00:05:53.473 EAL: Detected lcore 19 as core 19 on socket 0 00:05:53.473 EAL: Detected lcore 20 as core 20 on socket 0 00:05:53.473 EAL: Detected lcore 21 as core 21 on socket 0 00:05:53.473 EAL: Detected lcore 22 as core 22 on socket 0 00:05:53.473 EAL: Detected lcore 23 as core 23 on socket 0 00:05:53.473 EAL: Detected lcore 24 as core 24 on socket 0 00:05:53.473 EAL: Detected lcore 25 as core 25 on socket 0 00:05:53.473 EAL: Detected lcore 26 as core 26 on socket 0 00:05:53.473 EAL: Detected lcore 27 as core 27 on socket 0 00:05:53.473 EAL: Detected lcore 28 as core 28 on socket 0 00:05:53.473 EAL: Detected lcore 29 as core 29 on socket 0 00:05:53.473 EAL: Detected lcore 30 as core 30 on socket 0 00:05:53.473 EAL: Detected lcore 31 as core 31 on socket 0 00:05:53.473 EAL: Detected lcore 32 as core 32 on socket 0 00:05:53.473 EAL: Detected lcore 33 as core 33 on socket 0 00:05:53.473 EAL: Detected lcore 34 as core 34 on socket 0 00:05:53.473 EAL: Detected lcore 35 as core 35 on socket 0 00:05:53.473 EAL: Detected lcore 36 as core 0 on socket 1 00:05:53.473 EAL: Detected lcore 37 as core 1 on socket 1 00:05:53.473 EAL: Detected lcore 38 as core 2 on socket 1 00:05:53.473 EAL: Detected lcore 39 as core 3 on socket 1 00:05:53.473 EAL: Detected lcore 40 as core 4 on socket 1 00:05:53.473 EAL: Detected lcore 41 as core 5 on socket 1 00:05:53.473 EAL: Detected lcore 42 as core 6 on socket 1 00:05:53.473 EAL: Detected lcore 43 as core 7 on socket 1 00:05:53.473 EAL: Detected lcore 44 as core 8 on socket 1 00:05:53.473 EAL: Detected lcore 45 as core 9 on socket 1 00:05:53.473 EAL: Detected lcore 46 as core 10 on socket 1 00:05:53.473 EAL: Detected lcore 47 as core 11 on socket 1 00:05:53.473 EAL: Detected lcore 48 as core 12 on socket 1 00:05:53.473 EAL: Detected lcore 49 as core 13 on socket 1 00:05:53.473 EAL: Detected lcore 50 as core 14 on socket 1 00:05:53.473 EAL: Detected lcore 51 as core 15 on socket 1 00:05:53.473 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:53.473 EAL: Detected lcore 53 as core 17 on socket 1 00:05:53.473 EAL: Detected lcore 54 as core 18 on socket 1 00:05:53.473 EAL: Detected lcore 55 as core 19 on socket 1 00:05:53.473 EAL: Detected lcore 56 as core 20 on socket 1 00:05:53.473 EAL: Detected lcore 57 as core 21 on socket 1 00:05:53.473 EAL: Detected lcore 58 as core 22 on socket 1 00:05:53.473 EAL: Detected lcore 59 as core 23 on socket 1 00:05:53.473 EAL: Detected lcore 60 as core 24 on socket 1 00:05:53.473 EAL: Detected lcore 61 as core 25 on socket 1 00:05:53.473 EAL: Detected lcore 62 as core 26 on socket 1 00:05:53.473 EAL: Detected lcore 63 as core 27 on socket 1 00:05:53.473 EAL: Detected lcore 64 as core 28 on socket 1 00:05:53.473 EAL: Detected lcore 65 as core 29 on socket 1 00:05:53.473 EAL: Detected lcore 66 as core 30 on socket 1 00:05:53.473 EAL: Detected lcore 67 as core 31 on socket 1 00:05:53.473 EAL: Detected lcore 68 as core 32 on socket 1 00:05:53.473 EAL: Detected lcore 69 as core 33 on socket 1 00:05:53.473 EAL: Detected lcore 70 as core 34 on socket 1 00:05:53.473 EAL: Detected lcore 71 as core 35 on socket 1 00:05:53.473 EAL: Detected lcore 72 as core 0 on socket 0 00:05:53.473 EAL: Detected lcore 73 as core 1 on socket 0 00:05:53.473 EAL: Detected lcore 74 as core 2 on socket 0 00:05:53.473 EAL: Detected lcore 75 as core 3 on socket 0 00:05:53.473 EAL: Detected lcore 76 as core 4 on socket 0 00:05:53.473 EAL: Detected lcore 77 as core 5 on socket 0 00:05:53.473 EAL: Detected lcore 78 as core 6 on socket 0 00:05:53.473 EAL: Detected lcore 79 as core 7 on socket 0 00:05:53.473 EAL: Detected lcore 80 as core 8 on socket 0 00:05:53.473 EAL: Detected lcore 81 as core 9 on socket 0 00:05:53.473 EAL: Detected lcore 82 as core 10 on socket 0 00:05:53.473 EAL: Detected lcore 83 as core 11 on socket 0 00:05:53.473 EAL: Detected lcore 84 as core 12 on socket 0 00:05:53.473 EAL: Detected lcore 85 as core 13 on socket 0 00:05:53.473 EAL: Detected lcore 86 as core 14 on socket 0 00:05:53.473 EAL: Detected lcore 87 as core 15 on socket 0 00:05:53.473 EAL: Detected lcore 88 as core 16 on socket 0 00:05:53.473 EAL: Detected lcore 89 as core 17 on socket 0 00:05:53.473 EAL: Detected lcore 90 as core 18 on socket 0 00:05:53.473 EAL: Detected lcore 91 as core 19 on socket 0 00:05:53.473 EAL: Detected lcore 92 as core 20 on socket 0 00:05:53.473 EAL: Detected lcore 93 as core 21 on socket 0 00:05:53.473 EAL: Detected lcore 94 as core 22 on socket 0 00:05:53.473 EAL: Detected lcore 95 as core 23 on socket 0 00:05:53.473 EAL: Detected lcore 96 as core 24 on socket 0 00:05:53.473 EAL: Detected lcore 97 as core 25 on socket 0 00:05:53.473 EAL: Detected lcore 98 as core 26 on socket 0 00:05:53.473 EAL: Detected lcore 99 as core 27 on socket 0 00:05:53.473 EAL: Detected lcore 100 as core 28 on socket 0 00:05:53.473 EAL: Detected lcore 101 as core 29 on socket 0 00:05:53.473 EAL: Detected lcore 102 as core 30 on socket 0 00:05:53.473 EAL: Detected lcore 103 as core 31 on socket 0 00:05:53.473 EAL: Detected lcore 104 as core 32 on socket 0 00:05:53.473 EAL: Detected lcore 105 as core 33 on socket 0 00:05:53.473 EAL: Detected lcore 106 as core 34 on socket 0 00:05:53.473 EAL: Detected lcore 107 as core 35 on socket 0 00:05:53.473 EAL: Detected lcore 108 as core 0 on socket 1 00:05:53.473 EAL: Detected lcore 109 as core 1 on socket 1 00:05:53.473 EAL: Detected lcore 110 as core 2 on socket 1 00:05:53.473 EAL: Detected lcore 111 as core 3 on socket 1 00:05:53.473 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:53.473 EAL: Detected lcore 113 as core 5 on socket 1 00:05:53.473 EAL: Detected lcore 114 as core 6 on socket 1 00:05:53.473 EAL: Detected lcore 115 as core 7 on socket 1 00:05:53.473 EAL: Detected lcore 116 as core 8 on socket 1 00:05:53.473 EAL: Detected lcore 117 as core 9 on socket 1 00:05:53.473 EAL: Detected lcore 118 as core 10 on socket 1 00:05:53.473 EAL: Detected lcore 119 as core 11 on socket 1 00:05:53.473 EAL: Detected lcore 120 as core 12 on socket 1 00:05:53.473 EAL: Detected lcore 121 as core 13 on socket 1 00:05:53.473 EAL: Detected lcore 122 as core 14 on socket 1 00:05:53.473 EAL: Detected lcore 123 as core 15 on socket 1 00:05:53.473 EAL: Detected lcore 124 as core 16 on socket 1 00:05:53.473 EAL: Detected lcore 125 as core 17 on socket 1 00:05:53.473 EAL: Detected lcore 126 as core 18 on socket 1 00:05:53.473 EAL: Detected lcore 127 as core 19 on socket 1 00:05:53.473 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:53.473 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:53.473 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:53.473 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:53.473 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:53.473 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:53.473 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:53.473 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:53.473 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:53.473 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:53.473 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:53.473 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:53.473 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:53.473 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:53.473 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:53.473 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:53.473 EAL: Maximum logical cores by configuration: 128 00:05:53.473 EAL: Detected CPU lcores: 128 00:05:53.473 EAL: Detected NUMA nodes: 2 00:05:53.473 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:53.473 EAL: Detected shared linkage of DPDK 00:05:53.473 EAL: No shared files mode enabled, IPC will be disabled 00:05:53.473 EAL: Bus pci wants IOVA as 'DC' 00:05:53.473 EAL: Buses did not request a specific IOVA mode. 00:05:53.473 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:53.473 EAL: Selected IOVA mode 'VA' 00:05:53.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.473 EAL: Probing VFIO support... 00:05:53.473 EAL: IOMMU type 1 (Type 1) is supported 00:05:53.473 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:53.473 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:53.473 EAL: VFIO support initialized 00:05:53.473 EAL: Ask a virtual area of 0x2e000 bytes 00:05:53.473 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:53.473 EAL: Setting up physically contiguous memory... 
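The EAL lines just above (IOVA mode 'VA', "No free 2048 kB hugepages reported on node 1", IOMMU type 1 supported, VFIO support initialized) reflect host state that can be inspected from the shell before the EAL-based tests run. A quick sketch of those checks, independent of SPDK:

    # Per-NUMA-node 2 MB hugepage pools; node1 shows 0 free in the log above.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

    # Overall hugepage accounting.
    grep -i huge /proc/meminfo

    # IOMMU groups must exist for EAL to select VFIO with IOVA as VA.
    ls /sys/kernel/iommu_groups | wc -l

    # vfio-pci must be loaded; setup.sh bound the NVMe and IOAT devices to it above.
    lsmod | grep -E '^vfio'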
00:05:53.473 EAL: Setting maximum number of open files to 524288 00:05:53.473 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:53.473 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:53.473 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:53.473 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:53.473 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.473 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:53.473 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.473 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.473 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:53.473 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:53.473 EAL: Hugepages will be freed exactly as allocated. 00:05:53.473 EAL: No shared files mode enabled, IPC is disabled 00:05:53.473 EAL: No shared files mode enabled, IPC is disabled 00:05:53.473 EAL: TSC frequency is ~2400000 KHz 00:05:53.473 EAL: Main lcore 0 is ready (tid=7f0bb0325a40;cpuset=[0]) 00:05:53.473 EAL: Trying to obtain current memory policy. 00:05:53.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.473 EAL: Restoring previous memory policy: 0 00:05:53.473 EAL: request: mp_malloc_sync 00:05:53.473 EAL: No shared files mode enabled, IPC is disabled 00:05:53.473 EAL: Heap on socket 0 was expanded by 2MB 00:05:53.473 EAL: No shared files mode enabled, IPC is disabled 00:05:53.473 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:53.473 EAL: Mem event callback 'spdk:(nil)' registered 00:05:53.473 00:05:53.473 00:05:53.473 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.473 http://cunit.sourceforge.net/ 00:05:53.473 00:05:53.473 00:05:53.473 Suite: components_suite 00:05:53.756 Test: vtophys_malloc_test ...passed 00:05:53.757 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:53.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.757 EAL: Restoring previous memory policy: 4 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was expanded by 4MB 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was shrunk by 4MB 00:05:53.757 EAL: Trying to obtain current memory policy. 00:05:53.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.757 EAL: Restoring previous memory policy: 4 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was expanded by 6MB 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was shrunk by 6MB 00:05:53.757 EAL: Trying to obtain current memory policy. 00:05:53.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.757 EAL: Restoring previous memory policy: 4 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was expanded by 10MB 00:05:53.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.757 EAL: request: mp_malloc_sync 00:05:53.757 EAL: No shared files mode enabled, IPC is disabled 00:05:53.757 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.073 EAL: Trying to obtain current memory policy. 
00:05:54.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.073 EAL: Restoring previous memory policy: 4 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.073 EAL: Trying to obtain current memory policy. 00:05:54.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.073 EAL: Restoring previous memory policy: 4 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.073 EAL: Trying to obtain current memory policy. 00:05:54.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.073 EAL: Restoring previous memory policy: 4 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.073 EAL: Trying to obtain current memory policy. 00:05:54.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.073 EAL: Restoring previous memory policy: 4 00:05:54.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.073 EAL: request: mp_malloc_sync 00:05:54.073 EAL: No shared files mode enabled, IPC is disabled 00:05:54.073 EAL: Heap on socket 0 was expanded by 130MB 00:05:54.334 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.334 EAL: request: mp_malloc_sync 00:05:54.334 EAL: No shared files mode enabled, IPC is disabled 00:05:54.334 EAL: Heap on socket 0 was shrunk by 130MB 00:05:54.594 EAL: Trying to obtain current memory policy. 00:05:54.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.594 EAL: Restoring previous memory policy: 4 00:05:54.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.594 EAL: request: mp_malloc_sync 00:05:54.594 EAL: No shared files mode enabled, IPC is disabled 00:05:54.594 EAL: Heap on socket 0 was expanded by 258MB 00:05:54.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.855 EAL: request: mp_malloc_sync 00:05:54.855 EAL: No shared files mode enabled, IPC is disabled 00:05:54.855 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.116 EAL: Trying to obtain current memory policy. 
00:05:55.116 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.116 EAL: Restoring previous memory policy: 4 00:05:55.116 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.116 EAL: request: mp_malloc_sync 00:05:55.116 EAL: No shared files mode enabled, IPC is disabled 00:05:55.116 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.948 EAL: request: mp_malloc_sync 00:05:55.948 EAL: No shared files mode enabled, IPC is disabled 00:05:55.948 EAL: Heap on socket 0 was shrunk by 514MB 00:05:56.521 EAL: Trying to obtain current memory policy. 00:05:56.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.521 EAL: Restoring previous memory policy: 4 00:05:56.521 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.521 EAL: request: mp_malloc_sync 00:05:56.521 EAL: No shared files mode enabled, IPC is disabled 00:05:56.521 EAL: Heap on socket 0 was expanded by 1026MB 00:05:57.906 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.906 EAL: request: mp_malloc_sync 00:05:57.906 EAL: No shared files mode enabled, IPC is disabled 00:05:57.906 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:58.848 passed 00:05:58.848 00:05:58.848 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.848 suites 1 1 n/a 0 0 00:05:58.848 tests 2 2 2 0 0 00:05:58.848 asserts 497 497 497 0 n/a 00:05:58.848 00:05:58.848 Elapsed time = 5.385 seconds 00:05:58.848 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.848 EAL: request: mp_malloc_sync 00:05:58.848 EAL: No shared files mode enabled, IPC is disabled 00:05:58.848 EAL: Heap on socket 0 was shrunk by 2MB 00:05:59.108 EAL: No shared files mode enabled, IPC is disabled 00:05:59.108 EAL: No shared files mode enabled, IPC is disabled 00:05:59.108 EAL: No shared files mode enabled, IPC is disabled 00:05:59.108 00:05:59.108 real 0m5.634s 00:05:59.108 user 0m4.889s 00:05:59.108 sys 0m0.697s 00:05:59.108 20:13:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.108 20:13:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:59.108 ************************************ 00:05:59.108 END TEST env_vtophys 00:05:59.108 ************************************ 00:05:59.108 20:13:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:59.108 20:13:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.108 20:13:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.108 20:13:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.108 20:13:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.108 ************************************ 00:05:59.108 START TEST env_pci 00:05:59.108 ************************************ 00:05:59.108 20:13:10 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.108 00:05:59.108 00:05:59.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.108 http://cunit.sourceforge.net/ 00:05:59.108 00:05:59.108 00:05:59.108 Suite: pci 00:05:59.108 Test: pci_hook ...[2024-07-22 20:13:11.016074] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3376276 has claimed it 00:05:59.108 EAL: Cannot find device (10000:00:01.0) 00:05:59.108 EAL: Failed to attach device on primary process 00:05:59.108 passed 00:05:59.108 
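The pci_hook test output above exercises the device-claim failure path: SPDK serializes PCI device ownership through per-BDF lock files under /var/tmp (the path pattern /var/tmp/spdk_pci_lock_<addr> is printed verbatim in the error), and a second claim on a locked address reports the PID believed to hold it. If a test run dies uncleanly these lock files can linger; a small sketch for spotting them is below. This is an illustrative aside, not part of the test flow, and a lock should only be removed by hand when no live process still holds it.

    # List per-device PCI lock files like the one named in the error above and show
    # whether any live process still has them open.
    for lock in /var/tmp/spdk_pci_lock_*; do
        [ -e "$lock" ] || continue
        holder=$(fuser "$lock" 2>/dev/null)
        echo "$lock -> ${holder:-no live holder}"
    done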
00:05:59.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.108 suites 1 1 n/a 0 0 00:05:59.108 tests 1 1 1 0 0 00:05:59.108 asserts 25 25 25 0 n/a 00:05:59.108 00:05:59.108 Elapsed time = 0.050 seconds 00:05:59.108 00:05:59.109 real 0m0.130s 00:05:59.109 user 0m0.055s 00:05:59.109 sys 0m0.074s 00:05:59.109 20:13:11 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.109 20:13:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:59.109 ************************************ 00:05:59.109 END TEST env_pci 00:05:59.109 ************************************ 00:05:59.369 20:13:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:59.369 20:13:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.369 20:13:11 env -- env/env.sh@15 -- # uname 00:05:59.369 20:13:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.369 20:13:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:59.369 20:13:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.369 20:13:11 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:59.369 20:13:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.369 20:13:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.369 ************************************ 00:05:59.369 START TEST env_dpdk_post_init 00:05:59.369 ************************************ 00:05:59.369 20:13:11 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.369 EAL: Detected CPU lcores: 128 00:05:59.369 EAL: Detected NUMA nodes: 2 00:05:59.369 EAL: Detected shared linkage of DPDK 00:05:59.369 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.369 EAL: Selected IOVA mode 'VA' 00:05:59.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.369 EAL: VFIO support initialized 00:05:59.369 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.629 EAL: Using IOMMU type 1 (Type 1) 00:05:59.629 EAL: Ignore mapping IO port bar(1) 00:05:59.629 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:59.890 EAL: Ignore mapping IO port bar(1) 00:05:59.890 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:00.151 EAL: Ignore mapping IO port bar(1) 00:06:00.151 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:00.412 EAL: Ignore mapping IO port bar(1) 00:06:00.412 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:00.412 EAL: Ignore mapping IO port bar(1) 00:06:00.672 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:00.672 EAL: Ignore mapping IO port bar(1) 00:06:00.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:00.932 EAL: Ignore mapping IO port bar(1) 00:06:01.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:01.192 EAL: Ignore mapping IO port bar(1) 00:06:01.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:01.452 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:01.713 EAL: Ignore mapping IO port bar(1) 00:06:01.713 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:06:01.974 EAL: Ignore mapping IO port bar(1) 00:06:01.974 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:01.974 EAL: Ignore mapping IO port bar(1) 00:06:02.234 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:02.234 EAL: Ignore mapping IO port bar(1) 00:06:02.495 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:02.495 EAL: Ignore mapping IO port bar(1) 00:06:02.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:02.755 EAL: Ignore mapping IO port bar(1) 00:06:02.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:03.016 EAL: Ignore mapping IO port bar(1) 00:06:03.016 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:03.276 EAL: Ignore mapping IO port bar(1) 00:06:03.276 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:03.276 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:03.276 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:03.536 Starting DPDK initialization... 00:06:03.536 Starting SPDK post initialization... 00:06:03.536 SPDK NVMe probe 00:06:03.536 Attaching to 0000:65:00.0 00:06:03.536 Attached to 0000:65:00.0 00:06:03.536 Cleaning up... 00:06:05.449 00:06:05.449 real 0m5.827s 00:06:05.449 user 0m0.244s 00:06:05.449 sys 0m0.132s 00:06:05.449 20:13:17 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.449 20:13:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.449 ************************************ 00:06:05.449 END TEST env_dpdk_post_init 00:06:05.449 ************************************ 00:06:05.449 20:13:17 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.449 20:13:17 env -- env/env.sh@26 -- # uname 00:06:05.449 20:13:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.449 20:13:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.449 20:13:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.449 20:13:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.449 20:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.449 ************************************ 00:06:05.449 START TEST env_mem_callbacks 00:06:05.449 ************************************ 00:06:05.449 20:13:17 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.449 EAL: Detected CPU lcores: 128 00:06:05.449 EAL: Detected NUMA nodes: 2 00:06:05.449 EAL: Detected shared linkage of DPDK 00:06:05.449 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.449 EAL: Selected IOVA mode 'VA' 00:06:05.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.449 EAL: VFIO support initialized 00:06:05.449 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.449 00:06:05.449 00:06:05.449 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.449 http://cunit.sourceforge.net/ 00:06:05.449 00:06:05.449 00:06:05.449 Suite: memory 00:06:05.449 Test: test ... 
00:06:05.449 register 0x200000200000 2097152 00:06:05.449 malloc 3145728 00:06:05.449 register 0x200000400000 4194304 00:06:05.449 buf 0x2000004fffc0 len 3145728 PASSED 00:06:05.449 malloc 64 00:06:05.449 buf 0x2000004ffec0 len 64 PASSED 00:06:05.449 malloc 4194304 00:06:05.449 register 0x200000800000 6291456 00:06:05.449 buf 0x2000009fffc0 len 4194304 PASSED 00:06:05.449 free 0x2000004fffc0 3145728 00:06:05.449 free 0x2000004ffec0 64 00:06:05.449 unregister 0x200000400000 4194304 PASSED 00:06:05.449 free 0x2000009fffc0 4194304 00:06:05.449 unregister 0x200000800000 6291456 PASSED 00:06:05.449 malloc 8388608 00:06:05.449 register 0x200000400000 10485760 00:06:05.449 buf 0x2000005fffc0 len 8388608 PASSED 00:06:05.449 free 0x2000005fffc0 8388608 00:06:05.449 unregister 0x200000400000 10485760 PASSED 00:06:05.449 passed 00:06:05.449 00:06:05.449 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.449 suites 1 1 n/a 0 0 00:06:05.449 tests 1 1 1 0 0 00:06:05.449 asserts 15 15 15 0 n/a 00:06:05.449 00:06:05.450 Elapsed time = 0.043 seconds 00:06:05.450 00:06:05.450 real 0m0.155s 00:06:05.450 user 0m0.082s 00:06:05.450 sys 0m0.072s 00:06:05.450 20:13:17 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.450 20:13:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.450 ************************************ 00:06:05.450 END TEST env_mem_callbacks 00:06:05.450 ************************************ 00:06:05.450 20:13:17 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.450 00:06:05.450 real 0m12.546s 00:06:05.450 user 0m5.740s 00:06:05.450 sys 0m1.328s 00:06:05.450 20:13:17 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.450 20:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.450 ************************************ 00:06:05.450 END TEST env 00:06:05.450 ************************************ 00:06:05.450 20:13:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.450 20:13:17 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.450 20:13:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.450 20:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.450 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.450 ************************************ 00:06:05.450 START TEST rpc 00:06:05.450 ************************************ 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.450 * Looking for test storage... 00:06:05.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.450 20:13:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3377725 00:06:05.450 20:13:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.450 20:13:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3377725 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@829 -- # '[' -z 3377725 ']' 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
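The rpc suite starts a standalone target ("spdk_tgt -e bdev", per the trace above) and then blocks in waitforlisten until the UNIX-domain RPC socket answers. A bare-bones stand-in for that start-and-wait step is sketched below; the real waitforlisten helper does more bookkeeping, and the paths simply reuse the build-tree layout shown in the log.

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch the target with the bdev tracepoint group enabled, as in the trace.
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!

    # Poll the default RPC socket until the target responds (simplified waitforlisten).
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_pid) is listening on /var/tmp/spdk.sock"

Once the socket answers, the rpc_integrity test that follows drives bdev_malloc_create 8 512 and bdev_passthru_create -b Malloc0 -p Passthru0 over it, which is where the Malloc0 and Passthru0 JSON dumps later in the log come from.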
00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.450 20:13:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.450 20:13:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:05.711 [2024-07-22 20:13:17.557720] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:05.711 [2024-07-22 20:13:17.557833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377725 ] 00:06:05.711 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.711 [2024-07-22 20:13:17.680985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.971 [2024-07-22 20:13:17.860085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.971 [2024-07-22 20:13:17.860136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3377725' to capture a snapshot of events at runtime. 00:06:05.971 [2024-07-22 20:13:17.860148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.971 [2024-07-22 20:13:17.860159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.971 [2024-07-22 20:13:17.860167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3377725 for offline analysis/debug. 00:06:05.971 [2024-07-22 20:13:17.860197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.543 20:13:18 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.543 20:13:18 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.544 20:13:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.544 20:13:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.544 20:13:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:06.544 20:13:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:06.544 20:13:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.544 20:13:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.544 20:13:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 ************************************ 00:06:06.544 START TEST rpc_integrity 00:06:06.544 ************************************ 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.544 20:13:18 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.544 { 00:06:06.544 "name": "Malloc0", 00:06:06.544 "aliases": [ 00:06:06.544 "c8ee1964-58ce-4bcc-af8a-e025f3c7d9cb" 00:06:06.544 ], 00:06:06.544 "product_name": "Malloc disk", 00:06:06.544 "block_size": 512, 00:06:06.544 "num_blocks": 16384, 00:06:06.544 "uuid": "c8ee1964-58ce-4bcc-af8a-e025f3c7d9cb", 00:06:06.544 "assigned_rate_limits": { 00:06:06.544 "rw_ios_per_sec": 0, 00:06:06.544 "rw_mbytes_per_sec": 0, 00:06:06.544 "r_mbytes_per_sec": 0, 00:06:06.544 "w_mbytes_per_sec": 0 00:06:06.544 }, 00:06:06.544 "claimed": false, 00:06:06.544 "zoned": false, 00:06:06.544 "supported_io_types": { 00:06:06.544 "read": true, 00:06:06.544 "write": true, 00:06:06.544 "unmap": true, 00:06:06.544 "flush": true, 00:06:06.544 "reset": true, 00:06:06.544 "nvme_admin": false, 00:06:06.544 "nvme_io": false, 00:06:06.544 "nvme_io_md": false, 00:06:06.544 "write_zeroes": true, 00:06:06.544 "zcopy": true, 00:06:06.544 "get_zone_info": false, 00:06:06.544 "zone_management": false, 00:06:06.544 "zone_append": false, 00:06:06.544 "compare": false, 00:06:06.544 "compare_and_write": false, 00:06:06.544 "abort": true, 00:06:06.544 "seek_hole": false, 00:06:06.544 "seek_data": false, 00:06:06.544 "copy": true, 00:06:06.544 "nvme_iov_md": false 00:06:06.544 }, 00:06:06.544 "memory_domains": [ 00:06:06.544 { 00:06:06.544 "dma_device_id": "system", 00:06:06.544 "dma_device_type": 1 00:06:06.544 }, 00:06:06.544 { 00:06:06.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.544 "dma_device_type": 2 00:06:06.544 } 00:06:06.544 ], 00:06:06.544 "driver_specific": {} 00:06:06.544 } 00:06:06.544 ]' 00:06:06.544 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.805 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.805 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.805 [2024-07-22 20:13:18.602382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:06.805 [2024-07-22 20:13:18.602437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.805 [2024-07-22 20:13:18.602461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001ce80 00:06:06.805 [2024-07-22 20:13:18.602475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:06:06.805 [2024-07-22 20:13:18.604624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.805 [2024-07-22 20:13:18.604654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.805 Passthru0 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.805 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.805 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.805 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.805 { 00:06:06.805 "name": "Malloc0", 00:06:06.805 "aliases": [ 00:06:06.805 "c8ee1964-58ce-4bcc-af8a-e025f3c7d9cb" 00:06:06.805 ], 00:06:06.805 "product_name": "Malloc disk", 00:06:06.805 "block_size": 512, 00:06:06.805 "num_blocks": 16384, 00:06:06.805 "uuid": "c8ee1964-58ce-4bcc-af8a-e025f3c7d9cb", 00:06:06.805 "assigned_rate_limits": { 00:06:06.805 "rw_ios_per_sec": 0, 00:06:06.805 "rw_mbytes_per_sec": 0, 00:06:06.805 "r_mbytes_per_sec": 0, 00:06:06.805 "w_mbytes_per_sec": 0 00:06:06.805 }, 00:06:06.805 "claimed": true, 00:06:06.805 "claim_type": "exclusive_write", 00:06:06.805 "zoned": false, 00:06:06.805 "supported_io_types": { 00:06:06.805 "read": true, 00:06:06.805 "write": true, 00:06:06.805 "unmap": true, 00:06:06.805 "flush": true, 00:06:06.805 "reset": true, 00:06:06.806 "nvme_admin": false, 00:06:06.806 "nvme_io": false, 00:06:06.806 "nvme_io_md": false, 00:06:06.806 "write_zeroes": true, 00:06:06.806 "zcopy": true, 00:06:06.806 "get_zone_info": false, 00:06:06.806 "zone_management": false, 00:06:06.806 "zone_append": false, 00:06:06.806 "compare": false, 00:06:06.806 "compare_and_write": false, 00:06:06.806 "abort": true, 00:06:06.806 "seek_hole": false, 00:06:06.806 "seek_data": false, 00:06:06.806 "copy": true, 00:06:06.806 "nvme_iov_md": false 00:06:06.806 }, 00:06:06.806 "memory_domains": [ 00:06:06.806 { 00:06:06.806 "dma_device_id": "system", 00:06:06.806 "dma_device_type": 1 00:06:06.806 }, 00:06:06.806 { 00:06:06.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.806 "dma_device_type": 2 00:06:06.806 } 00:06:06.806 ], 00:06:06.806 "driver_specific": {} 00:06:06.806 }, 00:06:06.806 { 00:06:06.806 "name": "Passthru0", 00:06:06.806 "aliases": [ 00:06:06.806 "cf130a68-f2dd-59ea-824c-e387090e7e4b" 00:06:06.806 ], 00:06:06.806 "product_name": "passthru", 00:06:06.806 "block_size": 512, 00:06:06.806 "num_blocks": 16384, 00:06:06.806 "uuid": "cf130a68-f2dd-59ea-824c-e387090e7e4b", 00:06:06.806 "assigned_rate_limits": { 00:06:06.806 "rw_ios_per_sec": 0, 00:06:06.806 "rw_mbytes_per_sec": 0, 00:06:06.806 "r_mbytes_per_sec": 0, 00:06:06.806 "w_mbytes_per_sec": 0 00:06:06.806 }, 00:06:06.806 "claimed": false, 00:06:06.806 "zoned": false, 00:06:06.806 "supported_io_types": { 00:06:06.806 "read": true, 00:06:06.806 "write": true, 00:06:06.806 "unmap": true, 00:06:06.806 "flush": true, 00:06:06.806 "reset": true, 00:06:06.806 "nvme_admin": false, 00:06:06.806 "nvme_io": false, 00:06:06.806 "nvme_io_md": false, 00:06:06.806 "write_zeroes": true, 00:06:06.806 "zcopy": true, 00:06:06.806 "get_zone_info": false, 00:06:06.806 "zone_management": false, 00:06:06.806 "zone_append": false, 00:06:06.806 "compare": false, 00:06:06.806 "compare_and_write": false, 00:06:06.806 "abort": true, 
00:06:06.806 "seek_hole": false, 00:06:06.806 "seek_data": false, 00:06:06.806 "copy": true, 00:06:06.806 "nvme_iov_md": false 00:06:06.806 }, 00:06:06.806 "memory_domains": [ 00:06:06.806 { 00:06:06.806 "dma_device_id": "system", 00:06:06.806 "dma_device_type": 1 00:06:06.806 }, 00:06:06.806 { 00:06:06.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.806 "dma_device_type": 2 00:06:06.806 } 00:06:06.806 ], 00:06:06.806 "driver_specific": { 00:06:06.806 "passthru": { 00:06:06.806 "name": "Passthru0", 00:06:06.806 "base_bdev_name": "Malloc0" 00:06:06.806 } 00:06:06.806 } 00:06:06.806 } 00:06:06.806 ]' 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.806 20:13:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.806 00:06:06.806 real 0m0.291s 00:06:06.806 user 0m0.180s 00:06:06.806 sys 0m0.026s 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.806 20:13:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 ************************************ 00:06:06.806 END TEST rpc_integrity 00:06:06.806 ************************************ 00:06:06.806 20:13:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.806 20:13:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:06.806 20:13:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.806 20:13:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.806 20:13:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 ************************************ 00:06:06.806 START TEST rpc_plugins 00:06:06.806 ************************************ 00:06:06.806 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.067 20:13:18 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.067 { 00:06:07.067 "name": "Malloc1", 00:06:07.067 "aliases": [ 00:06:07.067 "3354ae4b-dedb-47ce-9757-5883a9578eaa" 00:06:07.067 ], 00:06:07.067 "product_name": "Malloc disk", 00:06:07.067 "block_size": 4096, 00:06:07.067 "num_blocks": 256, 00:06:07.067 "uuid": "3354ae4b-dedb-47ce-9757-5883a9578eaa", 00:06:07.067 "assigned_rate_limits": { 00:06:07.067 "rw_ios_per_sec": 0, 00:06:07.067 "rw_mbytes_per_sec": 0, 00:06:07.067 "r_mbytes_per_sec": 0, 00:06:07.067 "w_mbytes_per_sec": 0 00:06:07.067 }, 00:06:07.067 "claimed": false, 00:06:07.067 "zoned": false, 00:06:07.067 "supported_io_types": { 00:06:07.067 "read": true, 00:06:07.067 "write": true, 00:06:07.067 "unmap": true, 00:06:07.067 "flush": true, 00:06:07.067 "reset": true, 00:06:07.067 "nvme_admin": false, 00:06:07.067 "nvme_io": false, 00:06:07.067 "nvme_io_md": false, 00:06:07.067 "write_zeroes": true, 00:06:07.067 "zcopy": true, 00:06:07.067 "get_zone_info": false, 00:06:07.067 "zone_management": false, 00:06:07.067 "zone_append": false, 00:06:07.067 "compare": false, 00:06:07.067 "compare_and_write": false, 00:06:07.067 "abort": true, 00:06:07.067 "seek_hole": false, 00:06:07.067 "seek_data": false, 00:06:07.067 "copy": true, 00:06:07.067 "nvme_iov_md": false 00:06:07.067 }, 00:06:07.067 "memory_domains": [ 00:06:07.067 { 00:06:07.067 "dma_device_id": "system", 00:06:07.067 "dma_device_type": 1 00:06:07.067 }, 00:06:07.067 { 00:06:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.067 "dma_device_type": 2 00:06:07.067 } 00:06:07.067 ], 00:06:07.067 "driver_specific": {} 00:06:07.067 } 00:06:07.067 ]' 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:07.067 20:13:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.067 00:06:07.067 real 0m0.143s 00:06:07.067 user 0m0.087s 00:06:07.067 sys 0m0.021s 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.067 20:13:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 ************************************ 00:06:07.067 END TEST rpc_plugins 00:06:07.067 ************************************ 00:06:07.067 20:13:19 rpc -- common/autotest_common.sh@1142 -- # return 0 
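The rpc_plugins run above goes through rpc.py's plugin loader: with test/rpc_plugins on PYTHONPATH (exported at the start of the suite), the plugin's create_malloc and delete_malloc methods are dispatched like built-in RPCs, and the result is verified with the same bdev_get_bdevs | jq length pattern. A minimal hand-run sketch of that round-trip, assuming a built SPDK tree with a target already listening on the default RPC socket (the relative paths below are illustrative, not the workspace paths used by this job):
  # Sketch only -- reproduce the rpc_plugins round-trip by hand.
  export PYTHONPATH=$PYTHONPATH:./test/rpc_plugins
  malloc=$(./scripts/rpc.py --plugin rpc_plugin create_malloc)   # plugin prints the new bdev name, e.g. Malloc1
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 1
  ./scripts/rpc.py --plugin rpc_plugin delete_malloc "$malloc"
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0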
00:06:07.067 20:13:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:07.067 20:13:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.067 20:13:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.067 20:13:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.067 ************************************ 00:06:07.067 START TEST rpc_trace_cmd_test 00:06:07.067 ************************************ 00:06:07.067 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:07.067 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:07.067 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:07.067 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.067 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.068 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:07.068 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3377725", 00:06:07.068 "tpoint_group_mask": "0x8", 00:06:07.068 "iscsi_conn": { 00:06:07.068 "mask": "0x2", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "scsi": { 00:06:07.068 "mask": "0x4", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "bdev": { 00:06:07.068 "mask": "0x8", 00:06:07.068 "tpoint_mask": "0xffffffffffffffff" 00:06:07.068 }, 00:06:07.068 "nvmf_rdma": { 00:06:07.068 "mask": "0x10", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "nvmf_tcp": { 00:06:07.068 "mask": "0x20", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "ftl": { 00:06:07.068 "mask": "0x40", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "blobfs": { 00:06:07.068 "mask": "0x80", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "dsa": { 00:06:07.068 "mask": "0x200", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "thread": { 00:06:07.068 "mask": "0x400", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "nvme_pcie": { 00:06:07.068 "mask": "0x800", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "iaa": { 00:06:07.068 "mask": "0x1000", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "nvme_tcp": { 00:06:07.068 "mask": "0x2000", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "bdev_nvme": { 00:06:07.068 "mask": "0x4000", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 }, 00:06:07.068 "sock": { 00:06:07.068 "mask": "0x8000", 00:06:07.068 "tpoint_mask": "0x0" 00:06:07.068 } 00:06:07.068 }' 00:06:07.068 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:07.328 00:06:07.328 real 0m0.201s 00:06:07.328 user 0m0.170s 00:06:07.328 sys 0m0.022s 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.328 20:13:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.328 ************************************ 00:06:07.328 END TEST rpc_trace_cmd_test 00:06:07.328 ************************************ 00:06:07.328 20:13:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.328 20:13:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:07.328 20:13:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.328 20:13:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.328 20:13:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.328 20:13:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.328 20:13:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.328 ************************************ 00:06:07.328 START TEST rpc_daemon_integrity 00:06:07.328 ************************************ 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.328 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.590 { 00:06:07.590 "name": "Malloc2", 00:06:07.590 "aliases": [ 00:06:07.590 "bccf5d31-b475-4214-8b5c-234c45ba27c5" 00:06:07.590 ], 00:06:07.590 "product_name": "Malloc disk", 00:06:07.590 "block_size": 512, 00:06:07.590 "num_blocks": 16384, 00:06:07.590 "uuid": "bccf5d31-b475-4214-8b5c-234c45ba27c5", 00:06:07.590 "assigned_rate_limits": { 00:06:07.590 "rw_ios_per_sec": 0, 00:06:07.590 "rw_mbytes_per_sec": 0, 00:06:07.590 "r_mbytes_per_sec": 0, 00:06:07.590 "w_mbytes_per_sec": 0 00:06:07.590 }, 00:06:07.590 "claimed": false, 00:06:07.590 "zoned": false, 00:06:07.590 "supported_io_types": { 00:06:07.590 "read": true, 00:06:07.590 "write": true, 00:06:07.590 "unmap": true, 00:06:07.590 "flush": true, 00:06:07.590 "reset": true, 
00:06:07.590 "nvme_admin": false, 00:06:07.590 "nvme_io": false, 00:06:07.590 "nvme_io_md": false, 00:06:07.590 "write_zeroes": true, 00:06:07.590 "zcopy": true, 00:06:07.590 "get_zone_info": false, 00:06:07.590 "zone_management": false, 00:06:07.590 "zone_append": false, 00:06:07.590 "compare": false, 00:06:07.590 "compare_and_write": false, 00:06:07.590 "abort": true, 00:06:07.590 "seek_hole": false, 00:06:07.590 "seek_data": false, 00:06:07.590 "copy": true, 00:06:07.590 "nvme_iov_md": false 00:06:07.590 }, 00:06:07.590 "memory_domains": [ 00:06:07.590 { 00:06:07.590 "dma_device_id": "system", 00:06:07.590 "dma_device_type": 1 00:06:07.590 }, 00:06:07.590 { 00:06:07.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.590 "dma_device_type": 2 00:06:07.590 } 00:06:07.590 ], 00:06:07.590 "driver_specific": {} 00:06:07.590 } 00:06:07.590 ]' 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.590 [2024-07-22 20:13:19.452443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:07.590 [2024-07-22 20:13:19.452492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.590 [2024-07-22 20:13:19.452513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001e080 00:06:07.590 [2024-07-22 20:13:19.452526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.590 [2024-07-22 20:13:19.454598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.590 [2024-07-22 20:13:19.454627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.590 Passthru0 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.590 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.590 { 00:06:07.590 "name": "Malloc2", 00:06:07.590 "aliases": [ 00:06:07.590 "bccf5d31-b475-4214-8b5c-234c45ba27c5" 00:06:07.590 ], 00:06:07.590 "product_name": "Malloc disk", 00:06:07.590 "block_size": 512, 00:06:07.590 "num_blocks": 16384, 00:06:07.590 "uuid": "bccf5d31-b475-4214-8b5c-234c45ba27c5", 00:06:07.590 "assigned_rate_limits": { 00:06:07.590 "rw_ios_per_sec": 0, 00:06:07.590 "rw_mbytes_per_sec": 0, 00:06:07.590 "r_mbytes_per_sec": 0, 00:06:07.590 "w_mbytes_per_sec": 0 00:06:07.590 }, 00:06:07.590 "claimed": true, 00:06:07.590 "claim_type": "exclusive_write", 00:06:07.590 "zoned": false, 00:06:07.590 "supported_io_types": { 00:06:07.590 "read": true, 00:06:07.590 "write": true, 00:06:07.590 "unmap": true, 00:06:07.590 "flush": true, 00:06:07.590 "reset": true, 00:06:07.590 "nvme_admin": false, 00:06:07.590 "nvme_io": false, 00:06:07.590 "nvme_io_md": false, 00:06:07.590 
"write_zeroes": true, 00:06:07.590 "zcopy": true, 00:06:07.590 "get_zone_info": false, 00:06:07.590 "zone_management": false, 00:06:07.590 "zone_append": false, 00:06:07.590 "compare": false, 00:06:07.590 "compare_and_write": false, 00:06:07.590 "abort": true, 00:06:07.590 "seek_hole": false, 00:06:07.590 "seek_data": false, 00:06:07.590 "copy": true, 00:06:07.591 "nvme_iov_md": false 00:06:07.591 }, 00:06:07.591 "memory_domains": [ 00:06:07.591 { 00:06:07.591 "dma_device_id": "system", 00:06:07.591 "dma_device_type": 1 00:06:07.591 }, 00:06:07.591 { 00:06:07.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.591 "dma_device_type": 2 00:06:07.591 } 00:06:07.591 ], 00:06:07.591 "driver_specific": {} 00:06:07.591 }, 00:06:07.591 { 00:06:07.591 "name": "Passthru0", 00:06:07.591 "aliases": [ 00:06:07.591 "49cec514-6cdc-52d2-a4b7-0f339f2f6e3f" 00:06:07.591 ], 00:06:07.591 "product_name": "passthru", 00:06:07.591 "block_size": 512, 00:06:07.591 "num_blocks": 16384, 00:06:07.591 "uuid": "49cec514-6cdc-52d2-a4b7-0f339f2f6e3f", 00:06:07.591 "assigned_rate_limits": { 00:06:07.591 "rw_ios_per_sec": 0, 00:06:07.591 "rw_mbytes_per_sec": 0, 00:06:07.591 "r_mbytes_per_sec": 0, 00:06:07.591 "w_mbytes_per_sec": 0 00:06:07.591 }, 00:06:07.591 "claimed": false, 00:06:07.591 "zoned": false, 00:06:07.591 "supported_io_types": { 00:06:07.591 "read": true, 00:06:07.591 "write": true, 00:06:07.591 "unmap": true, 00:06:07.591 "flush": true, 00:06:07.591 "reset": true, 00:06:07.591 "nvme_admin": false, 00:06:07.591 "nvme_io": false, 00:06:07.591 "nvme_io_md": false, 00:06:07.591 "write_zeroes": true, 00:06:07.591 "zcopy": true, 00:06:07.591 "get_zone_info": false, 00:06:07.591 "zone_management": false, 00:06:07.591 "zone_append": false, 00:06:07.591 "compare": false, 00:06:07.591 "compare_and_write": false, 00:06:07.591 "abort": true, 00:06:07.591 "seek_hole": false, 00:06:07.591 "seek_data": false, 00:06:07.591 "copy": true, 00:06:07.591 "nvme_iov_md": false 00:06:07.591 }, 00:06:07.591 "memory_domains": [ 00:06:07.591 { 00:06:07.591 "dma_device_id": "system", 00:06:07.591 "dma_device_type": 1 00:06:07.591 }, 00:06:07.591 { 00:06:07.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.591 "dma_device_type": 2 00:06:07.591 } 00:06:07.591 ], 00:06:07.591 "driver_specific": { 00:06:07.591 "passthru": { 00:06:07.591 "name": "Passthru0", 00:06:07.591 "base_bdev_name": "Malloc2" 00:06:07.591 } 00:06:07.591 } 00:06:07.591 } 00:06:07.591 ]' 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.591 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.852 20:13:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.852 00:06:07.852 real 0m0.306s 00:06:07.852 user 0m0.187s 00:06:07.852 sys 0m0.033s 00:06:07.852 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.852 20:13:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.852 ************************************ 00:06:07.852 END TEST rpc_daemon_integrity 00:06:07.852 ************************************ 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.852 20:13:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:07.852 20:13:19 rpc -- rpc/rpc.sh@84 -- # killprocess 3377725 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@948 -- # '[' -z 3377725 ']' 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@952 -- # kill -0 3377725 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3377725 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3377725' 00:06:07.852 killing process with pid 3377725 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@967 -- # kill 3377725 00:06:07.852 20:13:19 rpc -- common/autotest_common.sh@972 -- # wait 3377725 00:06:09.766 00:06:09.766 real 0m3.950s 00:06:09.766 user 0m4.541s 00:06:09.766 sys 0m0.770s 00:06:09.766 20:13:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.766 20:13:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.766 ************************************ 00:06:09.766 END TEST rpc 00:06:09.766 ************************************ 00:06:09.766 20:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.766 20:13:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:09.766 20:13:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.766 20:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.766 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:09.766 ************************************ 00:06:09.766 START TEST skip_rpc 00:06:09.766 ************************************ 00:06:09.766 20:13:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:09.766 * Looking for test storage... 
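The rpc_integrity and rpc_daemon_integrity blocks above both walk the same create/claim/teardown pattern: create a malloc bdev, layer a passthru bdev on top of it (which claims the base, as the "claim_type": "exclusive_write" in the dump shows), confirm bdev_get_bdevs reports two bdevs, then delete both and confirm the list is empty again. A hand-run sketch of that sequence against a live target, assuming scripts/rpc.py from a built tree and the default RPC socket (paths illustrative):
  # Sketch: the create/claim/teardown pattern checked by rpc_integrity and rpc_daemon_integrity.
  malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)            # 8 MiB backing store, 512-byte blocks -> 16384 blocks
  ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 (malloc + passthru)
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete "$malloc"
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0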
00:06:09.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.766 20:13:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.766 20:13:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:09.766 20:13:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:09.766 20:13:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.766 20:13:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.766 20:13:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.766 ************************************ 00:06:09.766 START TEST skip_rpc 00:06:09.766 ************************************ 00:06:09.766 20:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:09.766 20:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3378582 00:06:09.766 20:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.766 20:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:09.766 20:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:09.766 [2024-07-22 20:13:21.636734] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:09.766 [2024-07-22 20:13:21.636850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378582 ] 00:06:09.766 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.766 [2024-07-22 20:13:21.765771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.027 [2024-07-22 20:13:21.945926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3378582 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3378582 ']' 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3378582 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378582 00:06:15.314 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.315 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.315 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378582' 00:06:15.315 killing process with pid 3378582 00:06:15.315 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3378582 00:06:15.315 20:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3378582 00:06:16.254 00:06:16.254 real 0m6.686s 00:06:16.254 user 0m6.338s 00:06:16.254 sys 0m0.379s 00:06:16.254 20:13:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.254 20:13:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.254 ************************************ 00:06:16.254 END TEST skip_rpc 00:06:16.254 ************************************ 00:06:16.254 20:13:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.254 20:13:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:16.254 20:13:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.254 20:13:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.254 20:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.514 ************************************ 00:06:16.514 START TEST skip_rpc_with_json 00:06:16.514 ************************************ 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3379952 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3379952 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.514 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3379952 ']' 00:06:16.515 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.515 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.515 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
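The skip_rpc case that just finished starts spdk_tgt with --no-rpc-server and asserts that RPCs cannot be serviced: spdk_get_version is expected to fail, after which the target is killed. A minimal sketch of that assertion, assuming a built tree (paths illustrative); the test itself uses the NOT and killprocess helpers from autotest_common.sh rather than the raw shell below:
  # Sketch: with --no-rpc-server up, any RPC call must fail.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                                        # the test also sleeps, since there is no RPC socket to wait on
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
      exit 1
  fi
  kill "$tgt_pid"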
00:06:16.515 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.515 20:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.515 [2024-07-22 20:13:28.397902] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:16.515 [2024-07-22 20:13:28.398020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379952 ] 00:06:16.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.515 [2024-07-22 20:13:28.519699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.775 [2024-07-22 20:13:28.698690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.398 [2024-07-22 20:13:29.287406] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:17.398 request: 00:06:17.398 { 00:06:17.398 "trtype": "tcp", 00:06:17.398 "method": "nvmf_get_transports", 00:06:17.398 "req_id": 1 00:06:17.398 } 00:06:17.398 Got JSON-RPC error response 00:06:17.398 response: 00:06:17.398 { 00:06:17.398 "code": -19, 00:06:17.398 "message": "No such device" 00:06:17.398 } 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.398 [2024-07-22 20:13:29.299529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.398 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.658 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.658 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.658 { 00:06:17.658 "subsystems": [ 00:06:17.658 { 00:06:17.658 "subsystem": "keyring", 00:06:17.658 "config": [] 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "subsystem": "iobuf", 00:06:17.658 "config": [ 00:06:17.658 { 00:06:17.658 "method": "iobuf_set_options", 00:06:17.658 "params": { 00:06:17.658 "small_pool_count": 8192, 00:06:17.658 "large_pool_count": 1024, 00:06:17.658 "small_bufsize": 8192, 00:06:17.658 "large_bufsize": 135168 00:06:17.658 } 00:06:17.658 } 00:06:17.658 ] 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "subsystem": 
"sock", 00:06:17.658 "config": [ 00:06:17.658 { 00:06:17.658 "method": "sock_set_default_impl", 00:06:17.658 "params": { 00:06:17.658 "impl_name": "posix" 00:06:17.658 } 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "method": "sock_impl_set_options", 00:06:17.658 "params": { 00:06:17.658 "impl_name": "ssl", 00:06:17.658 "recv_buf_size": 4096, 00:06:17.658 "send_buf_size": 4096, 00:06:17.658 "enable_recv_pipe": true, 00:06:17.658 "enable_quickack": false, 00:06:17.658 "enable_placement_id": 0, 00:06:17.658 "enable_zerocopy_send_server": true, 00:06:17.658 "enable_zerocopy_send_client": false, 00:06:17.658 "zerocopy_threshold": 0, 00:06:17.658 "tls_version": 0, 00:06:17.658 "enable_ktls": false 00:06:17.658 } 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "method": "sock_impl_set_options", 00:06:17.658 "params": { 00:06:17.658 "impl_name": "posix", 00:06:17.658 "recv_buf_size": 2097152, 00:06:17.658 "send_buf_size": 2097152, 00:06:17.658 "enable_recv_pipe": true, 00:06:17.658 "enable_quickack": false, 00:06:17.658 "enable_placement_id": 0, 00:06:17.658 "enable_zerocopy_send_server": true, 00:06:17.658 "enable_zerocopy_send_client": false, 00:06:17.658 "zerocopy_threshold": 0, 00:06:17.658 "tls_version": 0, 00:06:17.658 "enable_ktls": false 00:06:17.658 } 00:06:17.658 } 00:06:17.658 ] 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "subsystem": "vmd", 00:06:17.658 "config": [] 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "subsystem": "accel", 00:06:17.658 "config": [ 00:06:17.658 { 00:06:17.658 "method": "accel_set_options", 00:06:17.658 "params": { 00:06:17.658 "small_cache_size": 128, 00:06:17.658 "large_cache_size": 16, 00:06:17.658 "task_count": 2048, 00:06:17.658 "sequence_count": 2048, 00:06:17.658 "buf_count": 2048 00:06:17.658 } 00:06:17.658 } 00:06:17.658 ] 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "subsystem": "bdev", 00:06:17.658 "config": [ 00:06:17.658 { 00:06:17.658 "method": "bdev_set_options", 00:06:17.658 "params": { 00:06:17.658 "bdev_io_pool_size": 65535, 00:06:17.658 "bdev_io_cache_size": 256, 00:06:17.658 "bdev_auto_examine": true, 00:06:17.658 "iobuf_small_cache_size": 128, 00:06:17.658 "iobuf_large_cache_size": 16 00:06:17.658 } 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "method": "bdev_raid_set_options", 00:06:17.658 "params": { 00:06:17.658 "process_window_size_kb": 1024, 00:06:17.658 "process_max_bandwidth_mb_sec": 0 00:06:17.658 } 00:06:17.658 }, 00:06:17.658 { 00:06:17.658 "method": "bdev_iscsi_set_options", 00:06:17.659 "params": { 00:06:17.659 "timeout_sec": 30 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "bdev_nvme_set_options", 00:06:17.659 "params": { 00:06:17.659 "action_on_timeout": "none", 00:06:17.659 "timeout_us": 0, 00:06:17.659 "timeout_admin_us": 0, 00:06:17.659 "keep_alive_timeout_ms": 10000, 00:06:17.659 "arbitration_burst": 0, 00:06:17.659 "low_priority_weight": 0, 00:06:17.659 "medium_priority_weight": 0, 00:06:17.659 "high_priority_weight": 0, 00:06:17.659 "nvme_adminq_poll_period_us": 10000, 00:06:17.659 "nvme_ioq_poll_period_us": 0, 00:06:17.659 "io_queue_requests": 0, 00:06:17.659 "delay_cmd_submit": true, 00:06:17.659 "transport_retry_count": 4, 00:06:17.659 "bdev_retry_count": 3, 00:06:17.659 "transport_ack_timeout": 0, 00:06:17.659 "ctrlr_loss_timeout_sec": 0, 00:06:17.659 "reconnect_delay_sec": 0, 00:06:17.659 "fast_io_fail_timeout_sec": 0, 00:06:17.659 "disable_auto_failback": false, 00:06:17.659 "generate_uuids": false, 00:06:17.659 "transport_tos": 0, 00:06:17.659 "nvme_error_stat": false, 00:06:17.659 "rdma_srq_size": 
0, 00:06:17.659 "io_path_stat": false, 00:06:17.659 "allow_accel_sequence": false, 00:06:17.659 "rdma_max_cq_size": 0, 00:06:17.659 "rdma_cm_event_timeout_ms": 0, 00:06:17.659 "dhchap_digests": [ 00:06:17.659 "sha256", 00:06:17.659 "sha384", 00:06:17.659 "sha512" 00:06:17.659 ], 00:06:17.659 "dhchap_dhgroups": [ 00:06:17.659 "null", 00:06:17.659 "ffdhe2048", 00:06:17.659 "ffdhe3072", 00:06:17.659 "ffdhe4096", 00:06:17.659 "ffdhe6144", 00:06:17.659 "ffdhe8192" 00:06:17.659 ] 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "bdev_nvme_set_hotplug", 00:06:17.659 "params": { 00:06:17.659 "period_us": 100000, 00:06:17.659 "enable": false 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "bdev_wait_for_examine" 00:06:17.659 } 00:06:17.659 ] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "scsi", 00:06:17.659 "config": null 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "scheduler", 00:06:17.659 "config": [ 00:06:17.659 { 00:06:17.659 "method": "framework_set_scheduler", 00:06:17.659 "params": { 00:06:17.659 "name": "static" 00:06:17.659 } 00:06:17.659 } 00:06:17.659 ] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "vhost_scsi", 00:06:17.659 "config": [] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "vhost_blk", 00:06:17.659 "config": [] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "ublk", 00:06:17.659 "config": [] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "nbd", 00:06:17.659 "config": [] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "nvmf", 00:06:17.659 "config": [ 00:06:17.659 { 00:06:17.659 "method": "nvmf_set_config", 00:06:17.659 "params": { 00:06:17.659 "discovery_filter": "match_any", 00:06:17.659 "admin_cmd_passthru": { 00:06:17.659 "identify_ctrlr": false 00:06:17.659 } 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "nvmf_set_max_subsystems", 00:06:17.659 "params": { 00:06:17.659 "max_subsystems": 1024 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "nvmf_set_crdt", 00:06:17.659 "params": { 00:06:17.659 "crdt1": 0, 00:06:17.659 "crdt2": 0, 00:06:17.659 "crdt3": 0 00:06:17.659 } 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "method": "nvmf_create_transport", 00:06:17.659 "params": { 00:06:17.659 "trtype": "TCP", 00:06:17.659 "max_queue_depth": 128, 00:06:17.659 "max_io_qpairs_per_ctrlr": 127, 00:06:17.659 "in_capsule_data_size": 4096, 00:06:17.659 "max_io_size": 131072, 00:06:17.659 "io_unit_size": 131072, 00:06:17.659 "max_aq_depth": 128, 00:06:17.659 "num_shared_buffers": 511, 00:06:17.659 "buf_cache_size": 4294967295, 00:06:17.659 "dif_insert_or_strip": false, 00:06:17.659 "zcopy": false, 00:06:17.659 "c2h_success": true, 00:06:17.659 "sock_priority": 0, 00:06:17.659 "abort_timeout_sec": 1, 00:06:17.659 "ack_timeout": 0, 00:06:17.659 "data_wr_pool_size": 0 00:06:17.659 } 00:06:17.659 } 00:06:17.659 ] 00:06:17.659 }, 00:06:17.659 { 00:06:17.659 "subsystem": "iscsi", 00:06:17.659 "config": [ 00:06:17.659 { 00:06:17.659 "method": "iscsi_set_options", 00:06:17.659 "params": { 00:06:17.659 "node_base": "iqn.2016-06.io.spdk", 00:06:17.659 "max_sessions": 128, 00:06:17.659 "max_connections_per_session": 2, 00:06:17.659 "max_queue_depth": 64, 00:06:17.659 "default_time2wait": 2, 00:06:17.659 "default_time2retain": 20, 00:06:17.659 "first_burst_length": 8192, 00:06:17.659 "immediate_data": true, 00:06:17.659 "allow_duplicated_isid": false, 00:06:17.659 "error_recovery_level": 0, 00:06:17.659 "nop_timeout": 60, 00:06:17.659 
"nop_in_interval": 30, 00:06:17.659 "disable_chap": false, 00:06:17.659 "require_chap": false, 00:06:17.659 "mutual_chap": false, 00:06:17.659 "chap_group": 0, 00:06:17.659 "max_large_datain_per_connection": 64, 00:06:17.659 "max_r2t_per_connection": 4, 00:06:17.659 "pdu_pool_size": 36864, 00:06:17.659 "immediate_data_pool_size": 16384, 00:06:17.659 "data_out_pool_size": 2048 00:06:17.659 } 00:06:17.659 } 00:06:17.659 ] 00:06:17.659 } 00:06:17.659 ] 00:06:17.659 } 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3379952 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3379952 ']' 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3379952 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3379952 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3379952' 00:06:17.659 killing process with pid 3379952 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3379952 00:06:17.659 20:13:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3379952 00:06:19.570 20:13:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3380632 00:06:19.570 20:13:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:19.570 20:13:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3380632 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3380632 ']' 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3380632 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3380632 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3380632' 00:06:24.856 killing process with pid 3380632 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3380632 00:06:24.856 20:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3380632 00:06:25.798 20:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep 
-q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:25.798 20:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:25.798 00:06:25.798 real 0m9.518s 00:06:25.798 user 0m9.164s 00:06:25.798 sys 0m0.749s 00:06:25.799 20:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.799 20:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.799 ************************************ 00:06:25.799 END TEST skip_rpc_with_json 00:06:25.799 ************************************ 00:06:26.060 20:13:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:26.060 20:13:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:26.060 20:13:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.060 20:13:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.060 20:13:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.060 ************************************ 00:06:26.060 START TEST skip_rpc_with_delay 00:06:26.060 ************************************ 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:26.060 20:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.060 [2024-07-22 20:13:37.991279] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
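Before the skip_rpc_with_delay failure shown above, the skip_rpc_with_json case completed its round-trip: build state over RPC (nvmf_create_transport -t tcp), dump it with save_config into config.json, restart the target with --no-rpc-server --json config.json, and prove the state was replayed by grepping the new log for 'TCP Transport Init'. A hand-run sketch of that flow, assuming a built tree; the fixed sleeps are illustrative, the test uses waitforlisten instead:
  # Sketch: the save_config / --json replay round-trip from skip_rpc_with_json.
  ./build/bin/spdk_tgt -m 0x1 & tgt=$!
  sleep 1                                                        # illustrative; the test waits for the RPC socket
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  kill "$tgt"
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 & tgt=$!
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo "transport recreated from JSON alone"
  kill "$tgt"; rm log.txt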
00:06:26.060 [2024-07-22 20:13:37.991403] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.060 00:06:26.060 real 0m0.154s 00:06:26.060 user 0m0.084s 00:06:26.060 sys 0m0.068s 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.060 20:13:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:26.060 ************************************ 00:06:26.060 END TEST skip_rpc_with_delay 00:06:26.060 ************************************ 00:06:26.322 20:13:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:26.322 20:13:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:26.322 20:13:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:26.322 20:13:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:26.322 20:13:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.322 20:13:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.322 20:13:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.322 ************************************ 00:06:26.322 START TEST exit_on_failed_rpc_init 00:06:26.322 ************************************ 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3382035 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3382035 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3382035 ']' 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.322 20:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:26.322 [2024-07-22 20:13:38.242749] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
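skip_rpc_with_delay, which produced the two *ERROR* lines above, only checks that the flag combination is rejected: --wait-for-rpc makes no sense when --no-rpc-server suppresses the RPC server, so spdk_tgt must refuse to start rather than hang waiting for an RPC that can never arrive. A sketch of that expectation, assuming a built tree; the test wraps the call in the NOT helper from autotest_common.sh:
  # Sketch: this invocation is expected to fail fast, not to start the app.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite conflicting flags" >&2
      exit 1
  fi
  # stderr carries: Cannot use '--wait-for-rpc' if no RPC server is going to be started.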
00:06:26.322 [2024-07-22 20:13:38.242877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382035 ] 00:06:26.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.584 [2024-07-22 20:13:38.366908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.584 [2024-07-22 20:13:38.545939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:27.156 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.416 [2024-07-22 20:13:39.222169] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:27.417 [2024-07-22 20:13:39.222287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382368 ] 00:06:27.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.417 [2024-07-22 20:13:39.350250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.677 [2024-07-22 20:13:39.525903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.677 [2024-07-22 20:13:39.525989] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:27.677 [2024-07-22 20:13:39.526004] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:27.677 [2024-07-22 20:13:39.526016] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3382035 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3382035 ']' 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3382035 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3382035 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3382035' 00:06:27.937 killing process with pid 3382035 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3382035 00:06:27.937 20:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3382035 00:06:29.852 00:06:29.852 real 0m3.366s 00:06:29.852 user 0m3.820s 00:06:29.852 sys 0m0.602s 00:06:29.852 20:13:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.852 20:13:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 END TEST exit_on_failed_rpc_init 00:06:29.852 ************************************ 00:06:29.852 20:13:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:29.852 20:13:41 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:29.852 00:06:29.852 real 0m20.147s 00:06:29.852 user 0m19.564s 00:06:29.852 sys 0m2.088s 00:06:29.852 20:13:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.852 20:13:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 END TEST skip_rpc 00:06:29.852 ************************************ 00:06:29.852 20:13:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.852 20:13:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.852 20:13:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.852 20:13:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.852 20:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 START TEST rpc_client 00:06:29.852 ************************************ 00:06:29.852 20:13:41 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.852 * Looking for test storage... 00:06:29.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:29.852 20:13:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:29.852 OK 00:06:29.852 20:13:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:29.852 00:06:29.852 real 0m0.162s 00:06:29.852 user 0m0.073s 00:06:29.852 sys 0m0.098s 00:06:29.852 20:13:41 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.852 20:13:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 END TEST rpc_client 00:06:29.852 ************************************ 00:06:29.852 20:13:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.852 20:13:41 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:29.852 20:13:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.852 20:13:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.852 20:13:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.852 ************************************ 00:06:29.852 START TEST json_config 00:06:29.852 ************************************ 00:06:29.852 20:13:41 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.113 20:13:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.113 20:13:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.113 20:13:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.114 
20:13:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.114 20:13:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.114 20:13:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.114 20:13:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.114 20:13:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.114 20:13:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.114 20:13:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.114 20:13:41 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.114 20:13:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@47 -- # : 0 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.114 20:13:41 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.114 20:13:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:30.114 INFO: JSON configuration test init 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.114 20:13:41 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:30.114 20:13:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.114 20:13:41 json_config -- json_config/common.sh@10 -- # shift 00:06:30.114 20:13:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.114 20:13:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.114 20:13:41 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.114 20:13:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.114 20:13:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.114 20:13:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3382901 00:06:30.114 20:13:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.114 Waiting for target to run... 00:06:30.114 20:13:41 json_config -- json_config/common.sh@25 -- # waitforlisten 3382901 /var/tmp/spdk_tgt.sock 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 3382901 ']' 00:06:30.114 20:13:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.114 20:13:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.114 [2024-07-22 20:13:42.071763] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:30.114 [2024-07-22 20:13:42.071886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382901 ] 00:06:30.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.375 [2024-07-22 20:13:42.362450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.636 [2024-07-22 20:13:42.530791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:30.896 20:13:42 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.896 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.896 20:13:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:30.896 20:13:42 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:30.896 20:13:42 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:31.840 20:13:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.840 20:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:31.840 20:13:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:31.840 20:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@51 -- # sort 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:32.101 20:13:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.101 20:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:32.101 20:13:43 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:32.102 20:13:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.102 20:13:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.102 20:13:43 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:32.102 20:13:43 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:32.102 20:13:43 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:32.102 20:13:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.102 20:13:43 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.362 MallocForNvmf0 00:06:32.362 20:13:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.362 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.362 MallocForNvmf1 00:06:32.362 20:13:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:32.362 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:32.623 [2024-07-22 20:13:44.432046] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.623 20:13:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:32.623 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:32.623 20:13:44 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:32.623 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:32.884 20:13:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:32.884 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:32.884 20:13:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:32.884 20:13:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:33.146 [2024-07-22 20:13:45.042107] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.146 20:13:45 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:33.146 20:13:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.146 20:13:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.146 20:13:45 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:33.146 20:13:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.146 20:13:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.146 20:13:45 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:33.146 20:13:45 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.146 20:13:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.406 MallocBdevForConfigChangeCheck 00:06:33.406 20:13:45 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:33.406 20:13:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.406 20:13:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.406 20:13:45 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:33.406 20:13:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.667 20:13:45 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:33.667 INFO: shutting down applications... 00:06:33.667 20:13:45 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:33.667 20:13:45 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:33.667 20:13:45 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:33.667 20:13:45 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:34.238 Calling clear_iscsi_subsystem 00:06:34.238 Calling clear_nvmf_subsystem 00:06:34.238 Calling clear_nbd_subsystem 00:06:34.238 Calling clear_ublk_subsystem 00:06:34.238 Calling clear_vhost_blk_subsystem 00:06:34.238 Calling clear_vhost_scsi_subsystem 00:06:34.238 Calling clear_bdev_subsystem 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:34.238 20:13:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:34.499 20:13:46 json_config -- json_config/json_config.sh@349 -- # break 00:06:34.499 20:13:46 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:34.499 20:13:46 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:34.499 20:13:46 json_config -- json_config/common.sh@31 -- # local app=target 00:06:34.499 20:13:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.499 20:13:46 json_config -- json_config/common.sh@35 -- # [[ -n 3382901 ]] 00:06:34.499 20:13:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3382901 00:06:34.499 20:13:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.499 20:13:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.499 20:13:46 json_config -- json_config/common.sh@41 -- # kill -0 3382901 00:06:34.499 20:13:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.070 20:13:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.070 20:13:46 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.070 20:13:46 json_config -- json_config/common.sh@41 -- # kill -0 3382901 00:06:35.070 20:13:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.641 20:13:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.641 20:13:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.641 20:13:47 json_config -- json_config/common.sh@41 -- # kill -0 3382901 00:06:35.641 20:13:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.641 20:13:47 json_config -- json_config/common.sh@43 -- # break 00:06:35.641 20:13:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.641 20:13:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.641 SPDK target shutdown done 00:06:35.641 20:13:47 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:35.641 INFO: relaunching applications... 00:06:35.641 20:13:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.641 20:13:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.641 20:13:47 json_config -- json_config/common.sh@10 -- # shift 00:06:35.642 20:13:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.642 20:13:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.642 20:13:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.642 20:13:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.642 20:13:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.642 20:13:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3384241 00:06:35.642 20:13:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.642 Waiting for target to run... 00:06:35.642 20:13:47 json_config -- json_config/common.sh@25 -- # waitforlisten 3384241 /var/tmp/spdk_tgt.sock 00:06:35.642 20:13:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 3384241 ']' 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.642 20:13:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.642 [2024-07-22 20:13:47.447405] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:35.642 [2024-07-22 20:13:47.447524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384241 ] 00:06:35.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.902 [2024-07-22 20:13:47.773840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.163 [2024-07-22 20:13:47.950696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.105 [2024-07-22 20:13:48.881434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.105 [2024-07-22 20:13:48.913845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:37.105 20:13:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.105 20:13:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:37.105 20:13:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:37.105 00:06:37.105 20:13:48 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:37.105 20:13:48 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:37.105 INFO: Checking if target configuration is the same... 00:06:37.105 20:13:48 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.105 20:13:48 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:37.105 20:13:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.105 + '[' 2 -ne 2 ']' 00:06:37.105 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.105 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:37.105 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.105 +++ basename /dev/fd/62 00:06:37.105 ++ mktemp /tmp/62.XXX 00:06:37.105 + tmp_file_1=/tmp/62.8eN 00:06:37.105 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.105 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.105 + tmp_file_2=/tmp/spdk_tgt_config.json.hiT 00:06:37.105 + ret=0 00:06:37.105 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.366 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.366 + diff -u /tmp/62.8eN /tmp/spdk_tgt_config.json.hiT 00:06:37.366 + echo 'INFO: JSON config files are the same' 00:06:37.366 INFO: JSON config files are the same 00:06:37.366 + rm /tmp/62.8eN /tmp/spdk_tgt_config.json.hiT 00:06:37.366 + exit 0 00:06:37.366 20:13:49 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:37.366 20:13:49 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:37.366 INFO: changing configuration and checking if this can be detected... 
00:06:37.366 20:13:49 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:37.366 20:13:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:37.627 20:13:49 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:37.627 20:13:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.627 20:13:49 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.627 + '[' 2 -ne 2 ']' 00:06:37.627 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.627 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:37.627 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.627 +++ basename /dev/fd/62 00:06:37.627 ++ mktemp /tmp/62.XXX 00:06:37.627 + tmp_file_1=/tmp/62.av6 00:06:37.627 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.627 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.627 + tmp_file_2=/tmp/spdk_tgt_config.json.5VA 00:06:37.627 + ret=0 00:06:37.627 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.887 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.887 + diff -u /tmp/62.av6 /tmp/spdk_tgt_config.json.5VA 00:06:37.887 + ret=1 00:06:37.887 + echo '=== Start of file: /tmp/62.av6 ===' 00:06:37.887 + cat /tmp/62.av6 00:06:37.887 + echo '=== End of file: /tmp/62.av6 ===' 00:06:37.887 + echo '' 00:06:37.887 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5VA ===' 00:06:37.887 + cat /tmp/spdk_tgt_config.json.5VA 00:06:37.887 + echo '=== End of file: /tmp/spdk_tgt_config.json.5VA ===' 00:06:37.887 + echo '' 00:06:37.887 + rm /tmp/62.av6 /tmp/spdk_tgt_config.json.5VA 00:06:37.887 + exit 1 00:06:37.887 20:13:49 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:37.887 INFO: configuration change detected. 
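The trace above covers the whole json_config round trip: build a small NVMe-oF/TCP target over JSON-RPC, save its configuration, relaunch from the saved file, confirm the live config matches, then delete MallocBdevForConfigChangeCheck and confirm the resulting diff is detected. A minimal sketch of that sequence, assembled only from the rpc.py and config_filter.py invocations visible in the log — paths are relative to the SPDK repository root, the temporary file names are illustrative, and it assumes config_filter.py reads JSON on stdin and writes the sorted form on stdout, as its use inside json_diff.sh above suggests:

  # start a target on the RPC socket used throughout this run
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock &
  # (give the target a moment to bring up /var/tmp/spdk_tgt.sock before issuing RPCs)

  # create the malloc bdevs and the NVMe-oF/TCP subsystem the same way the test does
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

  # snapshot the live configuration, normalize both documents, and diff them
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_sorted.json
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'configs match' || echo 'configuration change detected'

Sorting both documents before comparing makes the plain diff order-insensitive, which is the same trick json_diff.sh applies in the log above.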
00:06:37.887 20:13:49 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@321 -- # [[ -n 3384241 ]] 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 20:13:49 json_config -- json_config/json_config.sh@327 -- # killprocess 3384241 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@948 -- # '[' -z 3384241 ']' 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@952 -- # kill -0 3384241 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@953 -- # uname 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.888 20:13:49 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3384241 00:06:38.148 20:13:49 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.148 20:13:49 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.148 20:13:49 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3384241' 00:06:38.148 killing process with pid 3384241 00:06:38.148 20:13:49 json_config -- common/autotest_common.sh@967 -- # kill 3384241 00:06:38.148 20:13:49 json_config -- common/autotest_common.sh@972 -- # wait 3384241 00:06:39.091 20:13:50 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:39.091 20:13:50 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:39.091 20:13:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.091 20:13:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.091 20:13:50 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:39.091 20:13:50 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:39.091 INFO: Success 00:06:39.091 00:06:39.091 real 0m8.934s 
00:06:39.091 user 0m10.093s 00:06:39.091 sys 0m1.949s 00:06:39.091 20:13:50 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.091 20:13:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.091 ************************************ 00:06:39.091 END TEST json_config 00:06:39.091 ************************************ 00:06:39.091 20:13:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.091 20:13:50 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:39.091 20:13:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.091 20:13:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.091 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.091 ************************************ 00:06:39.091 START TEST json_config_extra_key 00:06:39.091 ************************************ 00:06:39.091 20:13:50 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:39.091 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.091 20:13:50 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.091 20:13:50 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.091 20:13:50 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.091 20:13:50 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.092 20:13:50 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.092 20:13:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.092 20:13:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.092 20:13:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:39.092 20:13:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:39.092 20:13:50 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:39.092 20:13:50 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:39.092 INFO: launching applications... 00:06:39.092 20:13:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3385075 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:39.092 Waiting for target to run... 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3385075 /var/tmp/spdk_tgt.sock 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3385075 ']' 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.092 20:13:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:39.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.092 20:13:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:39.092 [2024-07-22 20:13:51.081802] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:39.092 [2024-07-22 20:13:51.081944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385075 ] 00:06:39.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.614 [2024-07-22 20:13:51.448179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.614 [2024-07-22 20:13:51.617358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.186 20:13:52 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.186 20:13:52 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:40.186 00:06:40.186 20:13:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:40.186 INFO: shutting down applications... 00:06:40.186 20:13:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3385075 ]] 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3385075 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3385075 00:06:40.186 20:13:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:40.761 20:13:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:40.761 20:13:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.761 20:13:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3385075 00:06:40.761 20:13:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:41.410 20:13:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:41.410 20:13:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.410 20:13:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3385075 00:06:41.410 20:13:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:41.671 20:13:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:41.671 20:13:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.671 20:13:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3385075 00:06:41.671 20:13:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3385075 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.242 20:13:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.242 20:13:54 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.242 SPDK target shutdown done 00:06:42.242 20:13:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:42.242 Success 00:06:42.242 00:06:42.242 real 0m3.252s 00:06:42.242 user 0m2.823s 00:06:42.242 sys 0m0.597s 00:06:42.242 20:13:54 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.242 20:13:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:42.242 ************************************ 00:06:42.242 END TEST json_config_extra_key 00:06:42.242 ************************************ 00:06:42.242 20:13:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.242 20:13:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.242 20:13:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.242 20:13:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.242 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.242 ************************************ 00:06:42.242 START TEST alias_rpc 00:06:42.242 ************************************ 00:06:42.242 20:13:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.503 * Looking for test storage... 00:06:42.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:42.503 20:13:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.503 20:13:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3385804 00:06:42.503 20:13:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3385804 00:06:42.503 20:13:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3385804 ']' 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.503 20:13:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.503 [2024-07-22 20:13:54.398826] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
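Before launching its own spdk_tgt, alias_rpc.sh installs an ERR trap (alias_rpc.sh@10 above) so that any failing command tears the target down again. A minimal sketch of that cleanup pattern; killprocess here is a simplified stand-in for the helper in autotest_common.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Simplified stand-in for the harness's killprocess helper.
killprocess() { kill -SIGINT "$1" 2>/dev/null; wait "$1" 2>/dev/null || true; }

$SPDK/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR
set -E          # let the ERR trap fire inside functions and subshells too

# Any command that fails from here on triggers the trap, e.g. an RPC against
# the default socket the target listens on:
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null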
00:06:42.503 [2024-07-22 20:13:54.398955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385804 ] 00:06:42.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.503 [2024-07-22 20:13:54.516878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.763 [2024-07-22 20:13:54.691800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.335 20:13:55 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.335 20:13:55 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.335 20:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:43.596 20:13:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3385804 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3385804 ']' 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3385804 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3385804 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3385804' 00:06:43.596 killing process with pid 3385804 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@967 -- # kill 3385804 00:06:43.596 20:13:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 3385804 00:06:45.510 00:06:45.510 real 0m2.941s 00:06:45.510 user 0m2.962s 00:06:45.510 sys 0m0.500s 00:06:45.510 20:13:57 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.510 20:13:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.510 ************************************ 00:06:45.510 END TEST alias_rpc 00:06:45.510 ************************************ 00:06:45.510 20:13:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.510 20:13:57 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:45.510 20:13:57 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:45.510 20:13:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.510 20:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.510 20:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:45.510 ************************************ 00:06:45.510 START TEST spdkcli_tcp 00:06:45.510 ************************************ 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:45.510 * Looking for test storage... 
00:06:45.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3386436 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3386436 00:06:45.510 20:13:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3386436 ']' 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.510 20:13:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.510 [2024-07-22 20:13:57.435213] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
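spdkcli_tcp exercises the JSON-RPC server over TCP rather than the UNIX socket: it bridges 127.0.0.1:9998 to /var/tmp/spdk.sock with socat and then points rpc.py at the TCP endpoint (both commands appear verbatim a little further down in this log). A minimal sketch of that bridge, assuming the spdk_tgt started above is already listening:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Same call the test makes: list every registered RPC method over the TCP bridge
# (100 connect retries, 2 second timeout).
$SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true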
00:06:45.510 [2024-07-22 20:13:57.435355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386436 ] 00:06:45.510 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.770 [2024-07-22 20:13:57.557486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.770 [2024-07-22 20:13:57.734261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.770 [2024-07-22 20:13:57.734263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.343 20:13:58 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.343 20:13:58 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:46.343 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3386538 00:06:46.343 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:46.343 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:46.604 [ 00:06:46.604 "bdev_malloc_delete", 00:06:46.604 "bdev_malloc_create", 00:06:46.604 "bdev_null_resize", 00:06:46.604 "bdev_null_delete", 00:06:46.604 "bdev_null_create", 00:06:46.604 "bdev_nvme_cuse_unregister", 00:06:46.604 "bdev_nvme_cuse_register", 00:06:46.604 "bdev_opal_new_user", 00:06:46.604 "bdev_opal_set_lock_state", 00:06:46.604 "bdev_opal_delete", 00:06:46.604 "bdev_opal_get_info", 00:06:46.604 "bdev_opal_create", 00:06:46.604 "bdev_nvme_opal_revert", 00:06:46.604 "bdev_nvme_opal_init", 00:06:46.604 "bdev_nvme_send_cmd", 00:06:46.604 "bdev_nvme_get_path_iostat", 00:06:46.604 "bdev_nvme_get_mdns_discovery_info", 00:06:46.604 "bdev_nvme_stop_mdns_discovery", 00:06:46.604 "bdev_nvme_start_mdns_discovery", 00:06:46.604 "bdev_nvme_set_multipath_policy", 00:06:46.604 "bdev_nvme_set_preferred_path", 00:06:46.604 "bdev_nvme_get_io_paths", 00:06:46.604 "bdev_nvme_remove_error_injection", 00:06:46.604 "bdev_nvme_add_error_injection", 00:06:46.604 "bdev_nvme_get_discovery_info", 00:06:46.604 "bdev_nvme_stop_discovery", 00:06:46.605 "bdev_nvme_start_discovery", 00:06:46.605 "bdev_nvme_get_controller_health_info", 00:06:46.605 "bdev_nvme_disable_controller", 00:06:46.605 "bdev_nvme_enable_controller", 00:06:46.605 "bdev_nvme_reset_controller", 00:06:46.605 "bdev_nvme_get_transport_statistics", 00:06:46.605 "bdev_nvme_apply_firmware", 00:06:46.605 "bdev_nvme_detach_controller", 00:06:46.605 "bdev_nvme_get_controllers", 00:06:46.605 "bdev_nvme_attach_controller", 00:06:46.605 "bdev_nvme_set_hotplug", 00:06:46.605 "bdev_nvme_set_options", 00:06:46.605 "bdev_passthru_delete", 00:06:46.605 "bdev_passthru_create", 00:06:46.605 "bdev_lvol_set_parent_bdev", 00:06:46.605 "bdev_lvol_set_parent", 00:06:46.605 "bdev_lvol_check_shallow_copy", 00:06:46.605 "bdev_lvol_start_shallow_copy", 00:06:46.605 "bdev_lvol_grow_lvstore", 00:06:46.605 "bdev_lvol_get_lvols", 00:06:46.605 "bdev_lvol_get_lvstores", 00:06:46.605 "bdev_lvol_delete", 00:06:46.605 "bdev_lvol_set_read_only", 00:06:46.605 "bdev_lvol_resize", 00:06:46.605 "bdev_lvol_decouple_parent", 00:06:46.605 "bdev_lvol_inflate", 00:06:46.605 "bdev_lvol_rename", 00:06:46.605 "bdev_lvol_clone_bdev", 00:06:46.605 "bdev_lvol_clone", 00:06:46.605 "bdev_lvol_snapshot", 00:06:46.605 "bdev_lvol_create", 00:06:46.605 "bdev_lvol_delete_lvstore", 00:06:46.605 
"bdev_lvol_rename_lvstore", 00:06:46.605 "bdev_lvol_create_lvstore", 00:06:46.605 "bdev_raid_set_options", 00:06:46.605 "bdev_raid_remove_base_bdev", 00:06:46.605 "bdev_raid_add_base_bdev", 00:06:46.605 "bdev_raid_delete", 00:06:46.605 "bdev_raid_create", 00:06:46.605 "bdev_raid_get_bdevs", 00:06:46.605 "bdev_error_inject_error", 00:06:46.605 "bdev_error_delete", 00:06:46.605 "bdev_error_create", 00:06:46.605 "bdev_split_delete", 00:06:46.605 "bdev_split_create", 00:06:46.605 "bdev_delay_delete", 00:06:46.605 "bdev_delay_create", 00:06:46.605 "bdev_delay_update_latency", 00:06:46.605 "bdev_zone_block_delete", 00:06:46.605 "bdev_zone_block_create", 00:06:46.605 "blobfs_create", 00:06:46.605 "blobfs_detect", 00:06:46.605 "blobfs_set_cache_size", 00:06:46.605 "bdev_aio_delete", 00:06:46.605 "bdev_aio_rescan", 00:06:46.605 "bdev_aio_create", 00:06:46.605 "bdev_ftl_set_property", 00:06:46.605 "bdev_ftl_get_properties", 00:06:46.605 "bdev_ftl_get_stats", 00:06:46.605 "bdev_ftl_unmap", 00:06:46.605 "bdev_ftl_unload", 00:06:46.605 "bdev_ftl_delete", 00:06:46.605 "bdev_ftl_load", 00:06:46.605 "bdev_ftl_create", 00:06:46.605 "bdev_virtio_attach_controller", 00:06:46.605 "bdev_virtio_scsi_get_devices", 00:06:46.605 "bdev_virtio_detach_controller", 00:06:46.605 "bdev_virtio_blk_set_hotplug", 00:06:46.605 "bdev_iscsi_delete", 00:06:46.605 "bdev_iscsi_create", 00:06:46.605 "bdev_iscsi_set_options", 00:06:46.605 "accel_error_inject_error", 00:06:46.605 "ioat_scan_accel_module", 00:06:46.605 "dsa_scan_accel_module", 00:06:46.605 "iaa_scan_accel_module", 00:06:46.605 "keyring_file_remove_key", 00:06:46.605 "keyring_file_add_key", 00:06:46.605 "keyring_linux_set_options", 00:06:46.605 "iscsi_get_histogram", 00:06:46.605 "iscsi_enable_histogram", 00:06:46.605 "iscsi_set_options", 00:06:46.605 "iscsi_get_auth_groups", 00:06:46.605 "iscsi_auth_group_remove_secret", 00:06:46.605 "iscsi_auth_group_add_secret", 00:06:46.605 "iscsi_delete_auth_group", 00:06:46.605 "iscsi_create_auth_group", 00:06:46.605 "iscsi_set_discovery_auth", 00:06:46.605 "iscsi_get_options", 00:06:46.605 "iscsi_target_node_request_logout", 00:06:46.605 "iscsi_target_node_set_redirect", 00:06:46.605 "iscsi_target_node_set_auth", 00:06:46.605 "iscsi_target_node_add_lun", 00:06:46.605 "iscsi_get_stats", 00:06:46.605 "iscsi_get_connections", 00:06:46.605 "iscsi_portal_group_set_auth", 00:06:46.605 "iscsi_start_portal_group", 00:06:46.605 "iscsi_delete_portal_group", 00:06:46.605 "iscsi_create_portal_group", 00:06:46.605 "iscsi_get_portal_groups", 00:06:46.605 "iscsi_delete_target_node", 00:06:46.605 "iscsi_target_node_remove_pg_ig_maps", 00:06:46.605 "iscsi_target_node_add_pg_ig_maps", 00:06:46.605 "iscsi_create_target_node", 00:06:46.605 "iscsi_get_target_nodes", 00:06:46.605 "iscsi_delete_initiator_group", 00:06:46.605 "iscsi_initiator_group_remove_initiators", 00:06:46.605 "iscsi_initiator_group_add_initiators", 00:06:46.605 "iscsi_create_initiator_group", 00:06:46.605 "iscsi_get_initiator_groups", 00:06:46.605 "nvmf_set_crdt", 00:06:46.605 "nvmf_set_config", 00:06:46.605 "nvmf_set_max_subsystems", 00:06:46.605 "nvmf_stop_mdns_prr", 00:06:46.605 "nvmf_publish_mdns_prr", 00:06:46.605 "nvmf_subsystem_get_listeners", 00:06:46.605 "nvmf_subsystem_get_qpairs", 00:06:46.605 "nvmf_subsystem_get_controllers", 00:06:46.605 "nvmf_get_stats", 00:06:46.605 "nvmf_get_transports", 00:06:46.605 "nvmf_create_transport", 00:06:46.605 "nvmf_get_targets", 00:06:46.605 "nvmf_delete_target", 00:06:46.605 "nvmf_create_target", 00:06:46.605 
"nvmf_subsystem_allow_any_host", 00:06:46.605 "nvmf_subsystem_remove_host", 00:06:46.605 "nvmf_subsystem_add_host", 00:06:46.605 "nvmf_ns_remove_host", 00:06:46.605 "nvmf_ns_add_host", 00:06:46.605 "nvmf_subsystem_remove_ns", 00:06:46.605 "nvmf_subsystem_add_ns", 00:06:46.605 "nvmf_subsystem_listener_set_ana_state", 00:06:46.605 "nvmf_discovery_get_referrals", 00:06:46.605 "nvmf_discovery_remove_referral", 00:06:46.605 "nvmf_discovery_add_referral", 00:06:46.605 "nvmf_subsystem_remove_listener", 00:06:46.605 "nvmf_subsystem_add_listener", 00:06:46.605 "nvmf_delete_subsystem", 00:06:46.605 "nvmf_create_subsystem", 00:06:46.605 "nvmf_get_subsystems", 00:06:46.605 "env_dpdk_get_mem_stats", 00:06:46.605 "nbd_get_disks", 00:06:46.605 "nbd_stop_disk", 00:06:46.605 "nbd_start_disk", 00:06:46.605 "ublk_recover_disk", 00:06:46.605 "ublk_get_disks", 00:06:46.605 "ublk_stop_disk", 00:06:46.605 "ublk_start_disk", 00:06:46.605 "ublk_destroy_target", 00:06:46.605 "ublk_create_target", 00:06:46.605 "virtio_blk_create_transport", 00:06:46.605 "virtio_blk_get_transports", 00:06:46.605 "vhost_controller_set_coalescing", 00:06:46.605 "vhost_get_controllers", 00:06:46.605 "vhost_delete_controller", 00:06:46.605 "vhost_create_blk_controller", 00:06:46.605 "vhost_scsi_controller_remove_target", 00:06:46.605 "vhost_scsi_controller_add_target", 00:06:46.605 "vhost_start_scsi_controller", 00:06:46.605 "vhost_create_scsi_controller", 00:06:46.605 "thread_set_cpumask", 00:06:46.605 "framework_get_governor", 00:06:46.605 "framework_get_scheduler", 00:06:46.605 "framework_set_scheduler", 00:06:46.605 "framework_get_reactors", 00:06:46.605 "thread_get_io_channels", 00:06:46.605 "thread_get_pollers", 00:06:46.605 "thread_get_stats", 00:06:46.605 "framework_monitor_context_switch", 00:06:46.605 "spdk_kill_instance", 00:06:46.605 "log_enable_timestamps", 00:06:46.605 "log_get_flags", 00:06:46.605 "log_clear_flag", 00:06:46.605 "log_set_flag", 00:06:46.605 "log_get_level", 00:06:46.605 "log_set_level", 00:06:46.605 "log_get_print_level", 00:06:46.605 "log_set_print_level", 00:06:46.605 "framework_enable_cpumask_locks", 00:06:46.605 "framework_disable_cpumask_locks", 00:06:46.605 "framework_wait_init", 00:06:46.605 "framework_start_init", 00:06:46.605 "scsi_get_devices", 00:06:46.605 "bdev_get_histogram", 00:06:46.605 "bdev_enable_histogram", 00:06:46.605 "bdev_set_qos_limit", 00:06:46.605 "bdev_set_qd_sampling_period", 00:06:46.605 "bdev_get_bdevs", 00:06:46.605 "bdev_reset_iostat", 00:06:46.605 "bdev_get_iostat", 00:06:46.605 "bdev_examine", 00:06:46.605 "bdev_wait_for_examine", 00:06:46.605 "bdev_set_options", 00:06:46.605 "notify_get_notifications", 00:06:46.605 "notify_get_types", 00:06:46.605 "accel_get_stats", 00:06:46.605 "accel_set_options", 00:06:46.605 "accel_set_driver", 00:06:46.605 "accel_crypto_key_destroy", 00:06:46.605 "accel_crypto_keys_get", 00:06:46.605 "accel_crypto_key_create", 00:06:46.605 "accel_assign_opc", 00:06:46.605 "accel_get_module_info", 00:06:46.605 "accel_get_opc_assignments", 00:06:46.605 "vmd_rescan", 00:06:46.605 "vmd_remove_device", 00:06:46.605 "vmd_enable", 00:06:46.605 "sock_get_default_impl", 00:06:46.605 "sock_set_default_impl", 00:06:46.605 "sock_impl_set_options", 00:06:46.605 "sock_impl_get_options", 00:06:46.605 "iobuf_get_stats", 00:06:46.605 "iobuf_set_options", 00:06:46.605 "framework_get_pci_devices", 00:06:46.605 "framework_get_config", 00:06:46.605 "framework_get_subsystems", 00:06:46.605 "trace_get_info", 00:06:46.605 "trace_get_tpoint_group_mask", 00:06:46.605 
"trace_disable_tpoint_group", 00:06:46.605 "trace_enable_tpoint_group", 00:06:46.605 "trace_clear_tpoint_mask", 00:06:46.605 "trace_set_tpoint_mask", 00:06:46.605 "keyring_get_keys", 00:06:46.605 "spdk_get_version", 00:06:46.605 "rpc_get_methods" 00:06:46.605 ] 00:06:46.605 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.605 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:46.605 20:13:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3386436 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3386436 ']' 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3386436 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:46.605 20:13:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3386436 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3386436' 00:06:46.606 killing process with pid 3386436 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3386436 00:06:46.606 20:13:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3386436 00:06:48.519 00:06:48.519 real 0m2.955s 00:06:48.519 user 0m5.102s 00:06:48.519 sys 0m0.530s 00:06:48.519 20:14:00 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.519 20:14:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.519 ************************************ 00:06:48.519 END TEST spdkcli_tcp 00:06:48.519 ************************************ 00:06:48.519 20:14:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.519 20:14:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.519 20:14:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.519 20:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.519 20:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:48.519 ************************************ 00:06:48.519 START TEST dpdk_mem_utility 00:06:48.519 ************************************ 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.519 * Looking for test storage... 
00:06:48.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:48.519 20:14:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:48.519 20:14:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3386983 00:06:48.519 20:14:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3386983 00:06:48.519 20:14:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3386983 ']' 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.519 20:14:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:48.519 [2024-07-22 20:14:00.448997] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:48.519 [2024-07-22 20:14:00.449140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386983 ] 00:06:48.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.781 [2024-07-22 20:14:00.575152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.781 [2024-07-22 20:14:00.755656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.351 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.351 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:49.351 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:49.351 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:49.351 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.351 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.351 { 00:06:49.351 "filename": "/tmp/spdk_mem_dump.txt" 00:06:49.351 } 00:06:49.351 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.351 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:49.612 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:49.612 1 heaps totaling size 820.000000 MiB 00:06:49.612 size: 820.000000 MiB heap id: 0 00:06:49.612 end heaps---------- 00:06:49.612 8 mempools totaling size 598.116089 MiB 00:06:49.612 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:49.612 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:49.612 size: 84.521057 MiB name: bdev_io_3386983 00:06:49.612 size: 51.011292 MiB name: evtpool_3386983 00:06:49.612 
size: 50.003479 MiB name: msgpool_3386983 00:06:49.613 size: 21.763794 MiB name: PDU_Pool 00:06:49.613 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:49.613 size: 0.026123 MiB name: Session_Pool 00:06:49.613 end mempools------- 00:06:49.613 6 memzones totaling size 4.142822 MiB 00:06:49.613 size: 1.000366 MiB name: RG_ring_0_3386983 00:06:49.613 size: 1.000366 MiB name: RG_ring_1_3386983 00:06:49.613 size: 1.000366 MiB name: RG_ring_4_3386983 00:06:49.613 size: 1.000366 MiB name: RG_ring_5_3386983 00:06:49.613 size: 0.125366 MiB name: RG_ring_2_3386983 00:06:49.613 size: 0.015991 MiB name: RG_ring_3_3386983 00:06:49.613 end memzones------- 00:06:49.613 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:49.613 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:49.613 list of free elements. size: 18.514832 MiB 00:06:49.613 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:49.613 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:49.613 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:49.613 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:49.613 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:49.613 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:49.613 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:49.613 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:49.613 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:49.613 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:49.613 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:49.613 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:49.613 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:49.613 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:49.613 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:49.613 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:49.613 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:49.613 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:49.613 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:49.613 list of standard malloc elements. 
size: 199.220764 MiB 00:06:49.613 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:49.613 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:49.613 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:49.613 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:49.613 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:49.613 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:49.613 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:49.613 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:49.613 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:49.613 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:49.613 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:49.613 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:49.613 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:49.613 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:49.613 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:49.613 list of memzone associated elements. 
size: 602.264404 MiB 00:06:49.613 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:49.613 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:49.613 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:49.613 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:49.613 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:49.613 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3386983_0 00:06:49.613 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:49.613 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3386983_0 00:06:49.613 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:49.613 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3386983_0 00:06:49.613 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:49.613 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:49.613 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:49.613 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:49.613 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:49.613 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3386983 00:06:49.613 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:49.613 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3386983 00:06:49.613 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:49.613 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3386983 00:06:49.613 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:49.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:49.613 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:49.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:49.613 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:49.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:49.613 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:49.613 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:49.613 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:49.613 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3386983 00:06:49.613 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:49.613 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3386983 00:06:49.613 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:49.613 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3386983 00:06:49.613 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:49.613 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3386983 00:06:49.613 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:49.613 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3386983 00:06:49.613 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:49.613 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:49.613 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:49.613 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:49.613 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:49.613 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:49.613 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:49.613 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3386983 00:06:49.613 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:49.613 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:49.613 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:49.613 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:49.613 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:49.613 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3386983 00:06:49.613 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:49.613 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:49.613 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:49.613 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3386983 00:06:49.613 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:49.613 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3386983 00:06:49.613 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:49.613 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:49.613 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:49.613 20:14:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3386983 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3386983 ']' 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3386983 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3386983 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.613 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3386983' 00:06:49.614 killing process with pid 3386983 00:06:49.614 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3386983 00:06:49.614 20:14:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3386983 00:06:51.525 00:06:51.525 real 0m2.864s 00:06:51.525 user 0m2.822s 00:06:51.525 sys 0m0.504s 00:06:51.525 20:14:03 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.525 20:14:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.525 ************************************ 00:06:51.525 END TEST dpdk_mem_utility 00:06:51.525 ************************************ 00:06:51.525 20:14:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.525 20:14:03 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.525 20:14:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.525 20:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.525 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:06:51.525 ************************************ 00:06:51.525 START TEST event 00:06:51.525 ************************************ 00:06:51.525 20:14:03 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.525 * Looking for test storage... 
00:06:51.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.525 20:14:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:51.525 20:14:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.525 20:14:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.525 20:14:03 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:51.525 20:14:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.525 20:14:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.525 ************************************ 00:06:51.525 START TEST event_perf 00:06:51.525 ************************************ 00:06:51.525 20:14:03 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.525 Running I/O for 1 seconds...[2024-07-22 20:14:03.360970] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:51.525 [2024-07-22 20:14:03.361072] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387672 ] 00:06:51.525 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.525 [2024-07-22 20:14:03.478570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.786 [2024-07-22 20:14:03.661646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.786 [2024-07-22 20:14:03.661728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.786 [2024-07-22 20:14:03.661847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.786 Running I/O for 1 seconds...[2024-07-22 20:14:03.661871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.175 00:06:53.175 lcore 0: 184651 00:06:53.175 lcore 1: 184650 00:06:53.175 lcore 2: 184648 00:06:53.175 lcore 3: 184651 00:06:53.175 done. 00:06:53.175 00:06:53.175 real 0m1.630s 00:06:53.175 user 0m4.489s 00:06:53.175 sys 0m0.134s 00:06:53.175 20:14:04 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.175 20:14:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.175 ************************************ 00:06:53.175 END TEST event_perf 00:06:53.175 ************************************ 00:06:53.175 20:14:04 event -- common/autotest_common.sh@1142 -- # return 0 00:06:53.175 20:14:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.175 20:14:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:53.175 20:14:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.175 20:14:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.175 ************************************ 00:06:53.175 START TEST event_reactor 00:06:53.175 ************************************ 00:06:53.175 20:14:05 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.175 [2024-07-22 20:14:05.076024] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
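The per-lcore counters above come from the standalone event_perf binary; the reactor and reactor_perf runs that follow use the same calling convention, a core mask plus a run time in seconds. Re-running the three binaries by hand with the flags this log used would look like:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Four cores (mask 0xF), one second, as in the event_perf run above.
$SPDK/test/event/event_perf/event_perf -m 0xF -t 1

# Single-core reactor tick test and reactor performance test, one second each.
$SPDK/test/event/reactor/reactor -t 1
$SPDK/test/event/reactor_perf/reactor_perf -t 1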
00:06:53.176 [2024-07-22 20:14:05.076135] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388037 ] 00:06:53.176 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.176 [2024-07-22 20:14:05.194527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.436 [2024-07-22 20:14:05.374284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.820 test_start 00:06:54.820 oneshot 00:06:54.820 tick 100 00:06:54.820 tick 100 00:06:54.820 tick 250 00:06:54.820 tick 100 00:06:54.820 tick 100 00:06:54.820 tick 100 00:06:54.820 tick 250 00:06:54.820 tick 500 00:06:54.820 tick 100 00:06:54.820 tick 100 00:06:54.820 tick 250 00:06:54.820 tick 100 00:06:54.820 tick 100 00:06:54.820 test_end 00:06:54.820 00:06:54.820 real 0m1.630s 00:06:54.820 user 0m1.484s 00:06:54.820 sys 0m0.139s 00:06:54.820 20:14:06 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.820 20:14:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:54.820 ************************************ 00:06:54.820 END TEST event_reactor 00:06:54.820 ************************************ 00:06:54.821 20:14:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.821 20:14:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.821 20:14:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.821 20:14:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.821 20:14:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.821 ************************************ 00:06:54.821 START TEST event_reactor_perf 00:06:54.821 ************************************ 00:06:54.821 20:14:06 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.821 [2024-07-22 20:14:06.778120] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:54.821 [2024-07-22 20:14:06.778233] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388392 ] 00:06:55.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.081 [2024-07-22 20:14:06.898426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.081 [2024-07-22 20:14:07.078923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.466 test_start 00:06:56.466 test_end 00:06:56.466 Performance: 296461 events per second 00:06:56.466 00:06:56.466 real 0m1.627s 00:06:56.466 user 0m1.474s 00:06:56.466 sys 0m0.146s 00:06:56.466 20:14:08 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.466 20:14:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.466 ************************************ 00:06:56.466 END TEST event_reactor_perf 00:06:56.466 ************************************ 00:06:56.466 20:14:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:56.466 20:14:08 event -- event/event.sh@49 -- # uname -s 00:06:56.466 20:14:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:56.466 20:14:08 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.466 20:14:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.466 20:14:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.466 20:14:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.466 ************************************ 00:06:56.466 START TEST event_scheduler 00:06:56.466 ************************************ 00:06:56.466 20:14:08 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.728 * Looking for test storage... 00:06:56.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:56.728 20:14:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:56.728 20:14:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3388783 00:06:56.728 20:14:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.728 20:14:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:56.728 20:14:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3388783 00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3388783 ']' 00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.728 20:14:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.728 [2024-07-22 20:14:08.615803] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:56.728 [2024-07-22 20:14:08.615924] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388783 ] 00:06:56.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.728 [2024-07-22 20:14:08.733028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.989 [2024-07-22 20:14:08.878599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.989 [2024-07-22 20:14:08.878739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.989 [2024-07-22 20:14:08.878832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.989 [2024-07-22 20:14:08.878856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.561 20:14:09 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.561 20:14:09 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:57.561 20:14:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.562 [2024-07-22 20:14:09.372710] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:57.562 [2024-07-22 20:14:09.372734] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:57.562 [2024-07-22 20:14:09.372752] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:57.562 [2024-07-22 20:14:09.372761] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:57.562 [2024-07-22 20:14:09.372769] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.562 20:14:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.562 [2024-07-22 20:14:09.546362] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
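Because the scheduler test app was launched with --wait-for-rpc, the framework idles until the test selects a scheduler and starts initialization over RPC (scheduler.sh@39 and @40 above). A minimal sketch of that sequence against the default socket, assuming the app from the log is already up:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

# Switch to the dynamic scheduler, then let the framework finish initialization.
$RPC framework_set_scheduler dynamic
$RPC framework_start_init

# Optional check: report which scheduler is now active.
$RPC framework_get_scheduler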
00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.562 20:14:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.562 20:14:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 ************************************ 00:06:57.823 START TEST scheduler_create_thread 00:06:57.823 ************************************ 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 2 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 3 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 4 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 5 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 6 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 7 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 8 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 9 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.823 10 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.823 20:14:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.209 20:14:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.209 20:14:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:59.209 20:14:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:59.209 20:14:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.209 20:14:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.152 20:14:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.152 20:14:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:00.152 20:14:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.152 20:14:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.724 20:14:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.724 20:14:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:00.724 20:14:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:00.724 20:14:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.724 20:14:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.666 20:14:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.666 00:07:01.666 real 0m3.893s 00:07:01.666 user 0m0.023s 00:07:01.666 sys 0m0.008s 00:07:01.667 20:14:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.667 20:14:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.667 ************************************ 00:07:01.667 END TEST scheduler_create_thread 00:07:01.667 ************************************ 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:01.667 20:14:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:01.667 20:14:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3388783 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3388783 ']' 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3388783 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3388783 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3388783' 00:07:01.667 killing process with pid 3388783 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3388783 00:07:01.667 20:14:13 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3388783 00:07:01.927 [2024-07-22 20:14:13.858798] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
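The scheduler_create_thread subtest drives the app through an out-of-tree RPC plugin (the --plugin scheduler_plugin calls above). A minimal sketch of the same create / set-active / delete cycle; the PYTHONPATH export is an assumption about how rpc.py finds the plugin module, and the thread ids 11 and 12 are simply the ones this particular run returned:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$SPDK/test/event/scheduler    # assumed location of scheduler_plugin.py
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

# Pin an active thread to core 0 (mask 0x1) at 100% load, as the test does.
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100

# Lower thread 11 to 50% active load, then delete thread 12 (ids taken from this run).
$RPC scheduler_thread_set_active 11 50
$RPC scheduler_thread_delete 12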
00:07:02.498 00:07:02.498 real 0m6.052s 00:07:02.498 user 0m12.429s 00:07:02.498 sys 0m0.466s 00:07:02.498 20:14:14 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.498 20:14:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.498 ************************************ 00:07:02.498 END TEST event_scheduler 00:07:02.498 ************************************ 00:07:02.760 20:14:14 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.760 20:14:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:02.760 20:14:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:02.760 20:14:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.760 20:14:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.760 20:14:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 ************************************ 00:07:02.760 START TEST app_repeat 00:07:02.760 ************************************ 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3390159 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3390159' 00:07:02.760 Process app_repeat pid: 3390159 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:02.760 spdk_app_start Round 0 00:07:02.760 20:14:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3390159 /var/tmp/spdk-nbd.sock 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3390159 ']' 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.760 20:14:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 [2024-07-22 20:14:14.644822] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:02.760 [2024-07-22 20:14:14.644939] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390159 ] 00:07:02.760 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.760 [2024-07-22 20:14:14.769120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.021 [2024-07-22 20:14:14.951209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.021 [2024-07-22 20:14:14.951244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.620 20:14:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.620 20:14:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.620 20:14:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.620 Malloc0 00:07:03.884 20:14:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.884 Malloc1 00:07:03.884 20:14:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.884 20:14:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.146 /dev/nbd0 00:07:04.146 20:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.146 20:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.146 20:14:16 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.146 1+0 records in 00:07:04.146 1+0 records out 00:07:04.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266495 s, 15.4 MB/s 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.146 20:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.146 20:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.146 20:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.146 20:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.408 /dev/nbd1 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.408 1+0 records in 00:07:04.408 1+0 records out 00:07:04.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313968 s, 13.0 MB/s 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.408 20:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.408 20:14:16 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.408 { 00:07:04.408 "nbd_device": "/dev/nbd0", 00:07:04.408 "bdev_name": "Malloc0" 00:07:04.408 }, 00:07:04.408 { 00:07:04.408 "nbd_device": "/dev/nbd1", 00:07:04.408 "bdev_name": "Malloc1" 00:07:04.408 } 00:07:04.408 ]' 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.408 { 00:07:04.408 "nbd_device": "/dev/nbd0", 00:07:04.408 "bdev_name": "Malloc0" 00:07:04.408 }, 00:07:04.408 { 00:07:04.408 "nbd_device": "/dev/nbd1", 00:07:04.408 "bdev_name": "Malloc1" 00:07:04.408 } 00:07:04.408 ]' 00:07:04.408 20:14:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.669 /dev/nbd1' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.669 /dev/nbd1' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.669 256+0 records in 00:07:04.669 256+0 records out 00:07:04.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012452 s, 84.2 MB/s 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.669 256+0 records in 00:07:04.669 256+0 records out 00:07:04.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019526 s, 53.7 MB/s 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.669 256+0 records in 00:07:04.669 256+0 records out 00:07:04.669 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0185176 s, 56.6 MB/s 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.669 20:14:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.670 20:14:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.931 20:14:16 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.931 20:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.192 20:14:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.192 20:14:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.453 20:14:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.396 [2024-07-22 20:14:18.282659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.658 [2024-07-22 20:14:18.452364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.658 [2024-07-22 20:14:18.452514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.658 [2024-07-22 20:14:18.590608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.658 [2024-07-22 20:14:18.590656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.574 20:14:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.574 20:14:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:08.574 spdk_app_start Round 1 00:07:08.574 20:14:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3390159 /var/tmp/spdk-nbd.sock 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3390159 ']' 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
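Round 0 above is the nbd_rpc_data_verify path from bdev/nbd_common.sh. The following is a condensed sketch of that write/verify cycle using only commands visible in the trace; $SPDK_DIR is a stand-in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path, and the bookkeeping and error handling of the real helpers are omitted.

  rpc_sock=/var/tmp/spdk-nbd.sock
  testfile=$SPDK_DIR/test/event/nbdrandtest
  # Expose both malloc bdevs as kernel nbd devices over the app's RPC socket.
  $SPDK_DIR/scripts/rpc.py -s "$rpc_sock" nbd_start_disk Malloc0 /dev/nbd0
  $SPDK_DIR/scripts/rpc.py -s "$rpc_sock" nbd_start_disk Malloc1 /dev/nbd1
  # Generate 1 MiB of random data, copy it onto each nbd device, then compare each
  # device back against the source file.
  dd if=/dev/urandom of="$testfile" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$testfile" "$nbd"
  done
  rm "$testfile"
  # Tear down: detach the nbd devices, then ask the app to restart for the next round.
  $SPDK_DIR/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk /dev/nbd0
  $SPDK_DIR/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk /dev/nbd1
  $SPDK_DIR/scripts/rpc.py -s "$rpc_sock" spdk_kill_instance SIGTERM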
00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.574 20:14:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:08.574 20:14:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.836 Malloc0 00:07:08.836 20:14:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.097 Malloc1 00:07:09.097 20:14:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.097 20:14:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.097 /dev/nbd0 00:07:09.097 20:14:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.097 20:14:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.097 20:14:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:09.098 1+0 records in 00:07:09.098 1+0 records out 00:07:09.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198727 s, 20.6 MB/s 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:09.098 20:14:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:09.098 20:14:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.098 20:14:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.098 20:14:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.359 /dev/nbd1 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.359 1+0 records in 00:07:09.359 1+0 records out 00:07:09.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030065 s, 13.6 MB/s 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:09.359 20:14:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.359 20:14:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:09.620 { 00:07:09.620 "nbd_device": "/dev/nbd0", 00:07:09.620 "bdev_name": "Malloc0" 00:07:09.620 }, 00:07:09.620 { 00:07:09.620 "nbd_device": "/dev/nbd1", 00:07:09.620 "bdev_name": "Malloc1" 00:07:09.620 } 00:07:09.620 ]' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.620 { 00:07:09.620 "nbd_device": "/dev/nbd0", 00:07:09.620 "bdev_name": "Malloc0" 00:07:09.620 }, 00:07:09.620 { 00:07:09.620 "nbd_device": "/dev/nbd1", 00:07:09.620 "bdev_name": "Malloc1" 00:07:09.620 } 00:07:09.620 ]' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.620 /dev/nbd1' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.620 /dev/nbd1' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.620 256+0 records in 00:07:09.620 256+0 records out 00:07:09.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124415 s, 84.3 MB/s 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.620 256+0 records in 00:07:09.620 256+0 records out 00:07:09.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149733 s, 70.0 MB/s 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.620 256+0 records in 00:07:09.620 256+0 records out 00:07:09.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211478 s, 49.6 MB/s 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:09.620 20:14:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.621 20:14:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.881 20:14:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.881 20:14:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.881 20:14:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.882 20:14:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.143 20:14:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.143 20:14:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.143 20:14:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.403 20:14:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.346 [2024-07-22 20:14:23.332064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.607 [2024-07-22 20:14:23.502781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.607 [2024-07-22 20:14:23.502799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.868 [2024-07-22 20:14:23.640917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.868 [2024-07-22 20:14:23.640964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:13.781 20:14:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:13.781 20:14:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:13.781 spdk_app_start Round 2 00:07:13.781 20:14:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3390159 /var/tmp/spdk-nbd.sock 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3390159 ']' 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
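The repeated '(( i <= 20 ))' and 'grep -q -w nbdN /proc/partitions' lines in every round come from the waitfornbd helper in autotest_common.sh (@866-@887 in the trace). Below is a minimal sketch of that polling pattern as it can be read back from the trace; the exact control flow, return codes, and the per-iteration sleep are assumptions, since only the individual commands are visible in this excerpt.

  waitfornbd() {
    local nbd_name=$1 i size
    local tmpfile=$SPDK_DIR/test/event/nbdtest    # scratch path as seen in the trace
    # First wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1    # interval assumed, not shown in the trace
    done
    # Then retry a single 4 KiB direct read until it actually returns data.
    for ((i = 1; i <= 20; i++)); do
      dd if=/dev/"$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$tmpfile")
      rm -f "$tmpfile"
      [ "$size" != 0 ] && return 0
      sleep 0.1    # interval assumed
    done
    return 1
  }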
00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.782 20:14:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:13.782 20:14:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.782 Malloc0 00:07:13.782 20:14:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.043 Malloc1 00:07:14.043 20:14:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.043 20:14:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:14.304 /dev/nbd0 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:14.304 1+0 records in 00:07:14.304 1+0 records out 00:07:14.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205311 s, 20.0 MB/s 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:14.304 /dev/nbd1 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.304 20:14:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.304 1+0 records in 00:07:14.304 1+0 records out 00:07:14.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257238 s, 15.9 MB/s 00:07:14.304 20:14:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.565 20:14:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:14.565 20:14:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:14.565 20:14:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:14.565 20:14:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:14.565 { 00:07:14.565 "nbd_device": "/dev/nbd0", 00:07:14.565 "bdev_name": "Malloc0" 00:07:14.565 }, 00:07:14.565 { 00:07:14.565 "nbd_device": "/dev/nbd1", 00:07:14.565 "bdev_name": "Malloc1" 00:07:14.565 } 00:07:14.565 ]' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.565 { 00:07:14.565 "nbd_device": "/dev/nbd0", 00:07:14.565 "bdev_name": "Malloc0" 00:07:14.565 }, 00:07:14.565 { 00:07:14.565 "nbd_device": "/dev/nbd1", 00:07:14.565 "bdev_name": "Malloc1" 00:07:14.565 } 00:07:14.565 ]' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.565 /dev/nbd1' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.565 /dev/nbd1' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.565 256+0 records in 00:07:14.565 256+0 records out 00:07:14.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116687 s, 89.9 MB/s 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.565 20:14:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.826 256+0 records in 00:07:14.826 256+0 records out 00:07:14.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186707 s, 56.2 MB/s 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.826 256+0 records in 00:07:14.826 256+0 records out 00:07:14.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182773 s, 57.4 MB/s 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.826 20:14:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.087 20:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.347 20:14:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.347 20:14:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.608 20:14:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:16.549 [2024-07-22 20:14:28.402634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.810 [2024-07-22 20:14:28.572711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.810 [2024-07-22 20:14:28.572713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.810 [2024-07-22 20:14:28.710860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.810 [2024-07-22 20:14:28.710905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.722 20:14:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3390159 /var/tmp/spdk-nbd.sock 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3390159 ']' 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
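Rounds 0 through 2 all run the same loop in test/event/event.sh (event.sh@23-@35 in the trace). The sketch below condenses that per-round structure, reconstructed only from the script line references above; $SPDK_DIR again stands in for the workspace path and $repeat_pid is the app_repeat pid (3390159 in this run).

  rpc_server=/var/tmp/spdk-nbd.sock
  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # Wait until the (re)started app_repeat instance listens on its RPC socket.
    waitforlisten "$repeat_pid" "$rpc_server"
    # Recreate the two malloc bdevs (size 64, block size 4096, per the trace) and
    # run the nbd write/verify pass against them.
    $SPDK_DIR/scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096    # Malloc0
    $SPDK_DIR/scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    # Tell the app to restart itself, then give it a few seconds before the next round.
    $SPDK_DIR/scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3
  done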
00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:18.722 20:14:30 event.app_repeat -- event/event.sh@39 -- # killprocess 3390159 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3390159 ']' 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3390159 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3390159 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3390159' 00:07:18.722 killing process with pid 3390159 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3390159 00:07:18.722 20:14:30 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3390159 00:07:19.663 spdk_app_start is called in Round 0. 00:07:19.663 Shutdown signal received, stop current app iteration 00:07:19.663 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:19.663 spdk_app_start is called in Round 1. 00:07:19.663 Shutdown signal received, stop current app iteration 00:07:19.663 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:19.663 spdk_app_start is called in Round 2. 00:07:19.663 Shutdown signal received, stop current app iteration 00:07:19.663 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:19.663 spdk_app_start is called in Round 3. 
00:07:19.663 Shutdown signal received, stop current app iteration 00:07:19.663 20:14:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:19.663 20:14:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:19.663 00:07:19.663 real 0m16.928s 00:07:19.663 user 0m34.691s 00:07:19.663 sys 0m2.254s 00:07:19.663 20:14:31 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.663 20:14:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.663 ************************************ 00:07:19.663 END TEST app_repeat 00:07:19.663 ************************************ 00:07:19.663 20:14:31 event -- common/autotest_common.sh@1142 -- # return 0 00:07:19.663 20:14:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:19.663 20:14:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:19.663 20:14:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.663 20:14:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.663 20:14:31 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.663 ************************************ 00:07:19.663 START TEST cpu_locks 00:07:19.663 ************************************ 00:07:19.663 20:14:31 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:19.663 * Looking for test storage... 00:07:19.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:19.663 20:14:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:19.663 20:14:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:19.663 20:14:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:19.663 20:14:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:19.663 20:14:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.663 20:14:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.663 20:14:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.924 ************************************ 00:07:19.924 START TEST default_locks 00:07:19.924 ************************************ 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3393751 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3393751 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3393751 ']' 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
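default_locks starts a single spdk_tgt on core 0 (-m 0x1) and, as the trace below shows, confirms that the target is holding a CPU core lock file. The check itself is just lslocks piped into grep; a minimal sketch using the pid from the trace:

    spdk_tgt_pid=3393751                                    # pid of the spdk_tgt started with -m 0x1
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock &&   # the lock table lists the /var/tmp/spdk_cpu_lock_* file
        echo "core lock is held, as expected"
    # the 'lslocks: write error' in the log is lslocks complaining about the pipe grep -q closed early;
    # it does not affect the result of the check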
00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.924 20:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.924 [2024-07-22 20:14:31.804618] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:19.924 [2024-07-22 20:14:31.804745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393751 ] 00:07:19.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.924 [2024-07-22 20:14:31.926607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.185 [2024-07-22 20:14:32.106026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.756 20:14:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.756 20:14:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:20.756 20:14:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3393751 00:07:20.756 20:14:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.756 20:14:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3393751 00:07:21.367 lslocks: write error 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3393751 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3393751 ']' 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3393751 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3393751 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3393751' 00:07:21.367 killing process with pid 3393751 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3393751 00:07:21.367 20:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3393751 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3393751 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3393751 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:23.298 20:14:34 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3393751 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3393751 ']' 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3393751) - No such process 00:07:23.298 ERROR: process (pid: 3393751) is no longer running 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.298 00:07:23.298 real 0m3.133s 00:07:23.298 user 0m3.100s 00:07:23.298 sys 0m0.653s 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.298 20:14:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.298 ************************************ 00:07:23.298 END TEST default_locks 00:07:23.298 ************************************ 00:07:23.298 20:14:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:23.298 20:14:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:23.298 20:14:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.298 20:14:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.298 20:14:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.299 ************************************ 00:07:23.299 START TEST default_locks_via_rpc 00:07:23.299 ************************************ 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3394452 00:07:23.299 20:14:34 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3394452 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3394452 ']' 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.299 20:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.299 [2024-07-22 20:14:35.005135] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:23.299 [2024-07-22 20:14:35.005251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394452 ] 00:07:23.299 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.299 [2024-07-22 20:14:35.116560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.299 [2024-07-22 20:14:35.293480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3394452 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 3394452 00:07:23.871 20:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3394452 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3394452 ']' 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3394452 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3394452 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3394452' 00:07:24.443 killing process with pid 3394452 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3394452 00:07:24.443 20:14:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3394452 00:07:26.355 00:07:26.355 real 0m3.099s 00:07:26.355 user 0m3.078s 00:07:26.355 sys 0m0.626s 00:07:26.355 20:14:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.355 20:14:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.355 ************************************ 00:07:26.355 END TEST default_locks_via_rpc 00:07:26.355 ************************************ 00:07:26.355 20:14:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:26.355 20:14:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:26.355 20:14:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.355 20:14:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.355 20:14:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.355 ************************************ 00:07:26.355 START TEST non_locking_app_on_locked_coremask 00:07:26.355 ************************************ 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3395154 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3395154 /var/tmp/spdk.sock 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3395154 ']' 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.355 20:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.355 [2024-07-22 20:14:38.186186] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:26.355 [2024-07-22 20:14:38.186317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395154 ] 00:07:26.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.355 [2024-07-22 20:14:38.310959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.616 [2024-07-22 20:14:38.489309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3395210 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3395210 /var/tmp/spdk2.sock 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3395210 ']' 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.187 20:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:27.187 [2024-07-22 20:14:39.145920] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
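This test demonstrates that a second target can run on core 0 while another target holds the core lock, as long as the second one opts out of core locking. Reduced to the two launch commands visible in the trace (spdk_tgt is a variable introduced here for readability):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                  # first instance claims the core 0 lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance: same core, no lock taken
    # both come up; only the first shows a spdk_cpu_lock entry in lslocks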
00:07:27.187 [2024-07-22 20:14:39.146034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395210 ] 00:07:27.187 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.447 [2024-07-22 20:14:39.302605] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.447 [2024-07-22 20:14:39.302648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.706 [2024-07-22 20:14:39.653786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.089 20:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.089 20:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:29.089 20:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3395154 00:07:29.089 20:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3395154 00:07:29.089 20:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.349 lslocks: write error 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3395154 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3395154 ']' 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3395154 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3395154 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3395154' 00:07:29.349 killing process with pid 3395154 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3395154 00:07:29.349 20:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3395154 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3395210 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3395210 ']' 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3395210 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 3395210 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3395210' 00:07:32.645 killing process with pid 3395210 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3395210 00:07:32.645 20:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3395210 00:07:34.557 00:07:34.557 real 0m8.171s 00:07:34.557 user 0m8.260s 00:07:34.557 sys 0m1.101s 00:07:34.557 20:14:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.557 20:14:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 ************************************ 00:07:34.557 END TEST non_locking_app_on_locked_coremask 00:07:34.557 ************************************ 00:07:34.557 20:14:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:34.557 20:14:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:34.557 20:14:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.557 20:14:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.557 20:14:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 ************************************ 00:07:34.557 START TEST locking_app_on_unlocked_coremask 00:07:34.557 ************************************ 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3396872 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3396872 /var/tmp/spdk.sock 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3396872 ']' 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.557 20:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 [2024-07-22 20:14:46.426439] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:34.557 [2024-07-22 20:14:46.426546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396872 ] 00:07:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.557 [2024-07-22 20:14:46.535912] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:34.557 [2024-07-22 20:14:46.535952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.818 [2024-07-22 20:14:46.711169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3396889 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3396889 /var/tmp/spdk2.sock 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3396889 ']' 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.389 20:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.389 [2024-07-22 20:14:47.350741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
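locking_app_on_unlocked_coremask is the mirror image of the previous case: the first target is started with --disable-cpumask-locks, so core 0 stays unlocked and the second, ordinary target can claim it. In outline (same spdk_tgt binary as in the sketch above):

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # leaves core 0 unlocked
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # takes the core 0 lock normally
    # the lslocks | grep spdk_cpu_lock check is then run against the second pid, 3396889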
00:07:35.389 [2024-07-22 20:14:47.350851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396889 ] 00:07:35.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.650 [2024-07-22 20:14:47.507149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.910 [2024-07-22 20:14:47.864067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.294 20:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.294 20:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:37.294 20:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3396889 00:07:37.294 20:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3396889 00:07:37.294 20:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.555 lslocks: write error 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3396872 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3396872 ']' 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3396872 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3396872 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3396872' 00:07:37.555 killing process with pid 3396872 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3396872 00:07:37.555 20:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3396872 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3396889 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3396889 ']' 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3396889 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3396889 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3396889' 00:07:40.856 killing process with pid 3396889 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3396889 00:07:40.856 20:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3396889 00:07:42.770 00:07:42.770 real 0m8.118s 00:07:42.770 user 0m8.192s 00:07:42.770 sys 0m1.101s 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.770 ************************************ 00:07:42.770 END TEST locking_app_on_unlocked_coremask 00:07:42.770 ************************************ 00:07:42.770 20:14:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:42.770 20:14:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:42.770 20:14:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.770 20:14:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.770 20:14:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.770 ************************************ 00:07:42.770 START TEST locking_app_on_locked_coremask 00:07:42.770 ************************************ 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3398441 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3398441 /var/tmp/spdk.sock 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3398441 ']' 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.770 20:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.770 [2024-07-22 20:14:54.608493] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:42.770 [2024-07-22 20:14:54.608606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398441 ] 00:07:42.770 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.770 [2024-07-22 20:14:54.718989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.030 [2024-07-22 20:14:54.898751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3398601 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3398601 /var/tmp/spdk2.sock 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3398601 /var/tmp/spdk2.sock 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3398601 /var/tmp/spdk2.sock 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3398601 ']' 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.601 20:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.601 [2024-07-22 20:14:55.561217] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
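Here the second target uses the same cpumask as the lock holder and is expected to fail, so the test wraps waitforlisten in the NOT helper. The trace only shows NOT's bookkeeping; its observable behaviour is simply "succeed when the wrapped command fails". A rough sketch (the real helper in autotest_common.sh also filters exit codes above 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # return 0 (success) only if the wrapped command failed
    }
    NOT waitforlisten 3398601 /var/tmp/spdk2.sock   # passes, because the second target exits, as shown just below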
00:07:43.601 [2024-07-22 20:14:55.561330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398601 ] 00:07:43.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.863 [2024-07-22 20:14:55.724332] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3398441 has claimed it. 00:07:43.863 [2024-07-22 20:14:55.724390] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3398601) - No such process 00:07:44.124 ERROR: process (pid: 3398601) is no longer running 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3398441 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3398441 00:07:44.124 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.762 lslocks: write error 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3398441 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3398441 ']' 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3398441 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398441 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398441' 00:07:44.762 killing process with pid 3398441 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3398441 00:07:44.762 20:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3398441 00:07:46.676 00:07:46.676 real 0m3.774s 00:07:46.676 user 0m3.884s 00:07:46.676 sys 0m0.804s 00:07:46.676 20:14:58 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.676 20:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.676 ************************************ 00:07:46.676 END TEST locking_app_on_locked_coremask 00:07:46.676 ************************************ 00:07:46.676 20:14:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:46.676 20:14:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:46.676 20:14:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.676 20:14:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.676 20:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.676 ************************************ 00:07:46.676 START TEST locking_overlapped_coremask 00:07:46.676 ************************************ 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3399295 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3399295 /var/tmp/spdk.sock 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3399295 ']' 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.676 20:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.676 [2024-07-22 20:14:58.452605] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
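The -m argument is a hexadecimal core bitmask, which is worth decoding to follow this test: 0x7 is binary 111, i.e. cores 0-2, matching the three reactors reported below; 0x1c, used below for the second instance, is binary 11100, i.e. cores 2-4. As an illustration only:

    printf '0x7  -> %s\n' "$(echo 'obase=2; 7'  | bc)"    # 111   -> cores 0,1,2
    printf '0x1c -> %s\n' "$(echo 'obase=2; 28' | bc)"    # 11100 -> cores 2,3,4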
00:07:46.676 [2024-07-22 20:14:58.452715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399295 ] 00:07:46.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.676 [2024-07-22 20:14:58.564562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.937 [2024-07-22 20:14:58.742969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.937 [2024-07-22 20:14:58.743049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.937 [2024-07-22 20:14:58.743051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3399376 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3399376 /var/tmp/spdk2.sock 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3399376 /var/tmp/spdk2.sock 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3399376 /var/tmp/spdk2.sock 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3399376 ']' 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.509 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.509 [2024-07-22 20:14:59.405690] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
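Because the two masks share core 2, the second target cannot acquire that core's lock and is expected to exit, which is what the NOT waitforlisten wrapper checks; the error printed below confirms it. A one-line way to see the contention (illustration, not from the script):

    (( 0x07 & 0x1c )) && echo "cpumasks overlap"   # 00111 & 11100 = 00100, so core 2 is contested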
00:07:47.509 [2024-07-22 20:14:59.405807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399376 ] 00:07:47.509 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.771 [2024-07-22 20:14:59.541266] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3399295 has claimed it. 00:07:47.771 [2024-07-22 20:14:59.541312] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:48.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3399376) - No such process 00:07:48.032 ERROR: process (pid: 3399376) is no longer running 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3399295 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3399295 ']' 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3399295 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.032 20:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3399295 00:07:48.032 20:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.032 20:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.032 20:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3399295' 00:07:48.032 killing process with pid 3399295 00:07:48.032 20:15:00 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3399295 00:07:48.032 20:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3399295 00:07:49.981 00:07:49.981 real 0m3.309s 00:07:49.981 user 0m8.610s 00:07:49.981 sys 0m0.566s 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.981 ************************************ 00:07:49.981 END TEST locking_overlapped_coremask 00:07:49.981 ************************************ 00:07:49.981 20:15:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:49.981 20:15:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:49.981 20:15:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.981 20:15:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.981 20:15:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.981 ************************************ 00:07:49.981 START TEST locking_overlapped_coremask_via_rpc 00:07:49.981 ************************************ 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3400111 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3400111 /var/tmp/spdk.sock 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3400111 ']' 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.981 20:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.981 [2024-07-22 20:15:01.846770] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:49.981 [2024-07-22 20:15:01.846911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400111 ] 00:07:49.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.981 [2024-07-22 20:15:01.971307] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
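The check_remaining_locks step at the end of the previous test shows how the lock files are named: one /var/tmp/spdk_cpu_lock_NNN file per claimed core, zero-padded to three digits. The comparison in the trace is equivalent to:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, i.e. the surviving -m 0x7 target
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected core locks remain"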
00:07:49.981 [2024-07-22 20:15:01.971357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.241 [2024-07-22 20:15:02.153334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.241 [2024-07-22 20:15:02.153499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.241 [2024-07-22 20:15:02.153499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3400211 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3400211 /var/tmp/spdk2.sock 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3400211 ']' 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.812 20:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.812 [2024-07-22 20:15:02.817540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:50.812 [2024-07-22 20:15:02.817641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400211 ] 00:07:51.073 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.073 [2024-07-22 20:15:02.952240] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
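At this point the test has two spdk_tgt instances up with deliberately overlapping core masks and cpumask locks disabled at startup, hence the two "CPU core locks deactivated" notices. A condensed sketch of that setup, built only from the flags visible in the trace; the binary path is shortened here for readability:

    # First target: cores 0-2, default RPC socket /var/tmp/spdk.sock, no core locks yet
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # Second target: cores 2-4, its own RPC socket so the test can drive both independently
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &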
00:07:51.073 [2024-07-22 20:15:02.952276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.333 [2024-07-22 20:15:03.224839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.333 [2024-07-22 20:15:03.224938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.333 [2024-07-22 20:15:03.224962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.275 [2024-07-22 20:15:04.086304] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3400111 has claimed it. 
00:07:52.275 request: 00:07:52.275 { 00:07:52.275 "method": "framework_enable_cpumask_locks", 00:07:52.275 "req_id": 1 00:07:52.275 } 00:07:52.275 Got JSON-RPC error response 00:07:52.275 response: 00:07:52.275 { 00:07:52.275 "code": -32603, 00:07:52.275 "message": "Failed to claim CPU core: 2" 00:07:52.275 } 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:52.275 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3400111 /var/tmp/spdk.sock 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3400111 ']' 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3400211 /var/tmp/spdk2.sock 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3400211 ']' 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
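The -32603 "Failed to claim CPU core: 2" response above is the expected result: enabling cpumask locks succeeds on the first target, which then holds cores 0-2 (-m 0x7), while the second target spans cores 2-4 (-m 0x1c), so its framework_enable_cpumask_locks call collides on core 2. The contested core is just the intersection of the two masks and can be checked directly:

    printf 'overlapping cores: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2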
00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.276 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.537 00:07:52.537 real 0m2.686s 00:07:52.537 user 0m0.852s 00:07:52.537 sys 0m0.154s 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.537 20:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.537 ************************************ 00:07:52.537 END TEST locking_overlapped_coremask_via_rpc 00:07:52.537 ************************************ 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.537 20:15:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:52.537 20:15:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3400111 ]] 00:07:52.537 20:15:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3400111 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3400111 ']' 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3400111 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400111 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400111' 00:07:52.537 killing process with pid 3400111 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3400111 00:07:52.537 20:15:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3400111 00:07:54.451 20:15:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3400211 ]] 00:07:54.451 20:15:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3400211 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3400211 ']' 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3400211 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400211 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400211' 00:07:54.451 killing process with pid 3400211 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3400211 00:07:54.451 20:15:06 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3400211 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3400111 ]] 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3400111 00:07:55.392 20:15:07 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3400111 ']' 00:07:55.392 20:15:07 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3400111 00:07:55.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3400111) - No such process 00:07:55.392 20:15:07 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3400111 is not found' 00:07:55.392 Process with pid 3400111 is not found 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3400211 ]] 00:07:55.392 20:15:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3400211 00:07:55.393 20:15:07 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3400211 ']' 00:07:55.393 20:15:07 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3400211 00:07:55.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3400211) - No such process 00:07:55.393 20:15:07 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3400211 is not found' 00:07:55.393 Process with pid 3400211 is not found 00:07:55.393 20:15:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.393 00:07:55.393 real 0m35.818s 00:07:55.393 user 0m56.896s 00:07:55.393 sys 0m6.116s 00:07:55.393 20:15:07 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.393 20:15:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.393 ************************************ 00:07:55.393 END TEST cpu_locks 00:07:55.393 ************************************ 00:07:55.654 20:15:07 event -- common/autotest_common.sh@1142 -- # return 0 00:07:55.654 00:07:55.654 real 1m4.249s 00:07:55.654 user 1m51.687s 00:07:55.654 sys 0m9.628s 00:07:55.654 20:15:07 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.654 20:15:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.654 ************************************ 00:07:55.654 END TEST event 00:07:55.654 ************************************ 00:07:55.654 20:15:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.654 20:15:07 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:55.654 20:15:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.654 20:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.654 20:15:07 
-- common/autotest_common.sh@10 -- # set +x 00:07:55.654 ************************************ 00:07:55.654 START TEST thread 00:07:55.654 ************************************ 00:07:55.654 20:15:07 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:55.654 * Looking for test storage... 00:07:55.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:55.654 20:15:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.654 20:15:07 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:55.654 20:15:07 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.654 20:15:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.654 ************************************ 00:07:55.654 START TEST thread_poller_perf 00:07:55.654 ************************************ 00:07:55.654 20:15:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.914 [2024-07-22 20:15:07.679918] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:55.914 [2024-07-22 20:15:07.680039] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401240 ] 00:07:55.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.914 [2024-07-22 20:15:07.808475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.174 [2024-07-22 20:15:07.992116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.174 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:57.559 ====================================== 00:07:57.559 busy:2410649888 (cyc) 00:07:57.559 total_run_count: 281000 00:07:57.559 tsc_hz: 2400000000 (cyc) 00:07:57.559 ====================================== 00:07:57.559 poller_cost: 8578 (cyc), 3574 (nsec) 00:07:57.559 00:07:57.559 real 0m1.649s 00:07:57.559 user 0m1.506s 00:07:57.559 sys 0m0.135s 00:07:57.559 20:15:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.559 20:15:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.559 ************************************ 00:07:57.559 END TEST thread_poller_perf 00:07:57.559 ************************************ 00:07:57.559 20:15:09 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:57.559 20:15:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.559 20:15:09 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:57.559 20:15:09 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.559 20:15:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.559 ************************************ 00:07:57.559 START TEST thread_poller_perf 00:07:57.559 ************************************ 00:07:57.559 20:15:09 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.559 [2024-07-22 20:15:09.402845] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:57.559 [2024-07-22 20:15:09.402952] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401818 ] 00:07:57.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.559 [2024-07-22 20:15:09.518633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.820 [2024-07-22 20:15:09.695869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.820 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:59.204 ====================================== 00:07:59.204 busy:2403040160 (cyc) 00:07:59.204 total_run_count: 3671000 00:07:59.204 tsc_hz: 2400000000 (cyc) 00:07:59.204 ====================================== 00:07:59.204 poller_cost: 654 (cyc), 272 (nsec) 00:07:59.204 00:07:59.204 real 0m1.622s 00:07:59.204 user 0m1.481s 00:07:59.204 sys 0m0.133s 00:07:59.204 20:15:10 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.204 20:15:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.204 ************************************ 00:07:59.204 END TEST thread_poller_perf 00:07:59.204 ************************************ 00:07:59.204 20:15:11 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:59.204 20:15:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:59.204 00:07:59.204 real 0m3.519s 00:07:59.204 user 0m3.074s 00:07:59.204 sys 0m0.444s 00:07:59.204 20:15:11 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.204 20:15:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.204 ************************************ 00:07:59.204 END TEST thread 00:07:59.204 ************************************ 00:07:59.204 20:15:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:59.204 20:15:11 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:59.205 20:15:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.205 20:15:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.205 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 ************************************ 00:07:59.205 START TEST accel 00:07:59.205 ************************************ 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:59.205 * Looking for test storage... 00:07:59.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:59.205 20:15:11 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:59.205 20:15:11 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:59.205 20:15:11 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:59.205 20:15:11 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3402497 00:07:59.205 20:15:11 accel -- accel/accel.sh@63 -- # waitforlisten 3402497 00:07:59.205 20:15:11 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@829 -- # '[' -z 3402497 ']' 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
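For reference, the poller_cost figures printed by the two poller_perf runs above follow from dividing busy cycles by total_run_count and converting to nanoseconds with the reported 2.4 GHz TSC; plugging the logged numbers into shell arithmetic reproduces them (integer division, so the last digit rounds down):

    echo $(( 2410649888 / 281000 ))              # 8578 cycles per poll, 1 us period run
    echo $(( 8578 * 1000000000 / 2400000000 ))   # 3574 ns per poll
    echo $(( 2403040160 / 3671000 ))             # 654 cycles per poll, 0 us period run
    echo $(( 654 * 1000000000 / 2400000000 ))    # 272 ns per poll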
00:07:59.205 20:15:11 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.205 20:15:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.205 20:15:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 20:15:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.205 20:15:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.205 20:15:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.205 20:15:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.205 20:15:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:59.205 20:15:11 accel -- accel/accel.sh@41 -- # jq -r . 00:07:59.466 [2024-07-22 20:15:11.280712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:59.466 [2024-07-22 20:15:11.280846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402497 ] 00:07:59.466 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.466 [2024-07-22 20:15:11.400836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.726 [2024-07-22 20:15:11.582908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.296 20:15:12 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.296 20:15:12 accel -- common/autotest_common.sh@862 -- # return 0 00:08:00.296 20:15:12 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:00.296 20:15:12 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:00.296 20:15:12 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:00.296 20:15:12 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:00.296 20:15:12 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:00.296 20:15:12 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:00.296 20:15:12 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:00.296 20:15:12 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.296 20:15:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.296 20:15:12 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.296 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.296 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.296 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.296 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.296 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.296 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.296 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 
20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # IFS== 00:08:00.297 20:15:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:00.297 20:15:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:00.297 20:15:12 accel -- accel/accel.sh@75 -- # killprocess 3402497 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@948 -- # '[' -z 3402497 ']' 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@952 -- # kill -0 3402497 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@953 -- # uname 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402497 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402497' 00:08:00.297 killing process with pid 3402497 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@967 -- # kill 3402497 00:08:00.297 20:15:12 accel -- common/autotest_common.sh@972 -- # wait 3402497 00:08:02.208 20:15:13 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:02.208 20:15:13 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:02.208 20:15:13 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:02.208 20:15:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.208 20:15:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.208 20:15:13 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:02.208 20:15:13 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
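The get_expected_opcs loop traced above builds its opcode-to-module table by piping the accel_get_opc_assignments RPC output through jq. The jq expression is easier to read against a small, made-up payload (the two opcode names below are only illustrative, not the target's actual reply):

    echo '{"copy":"software","crc32c":"software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # crc32c=software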
00:08:02.208 20:15:14 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.208 20:15:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:02.208 20:15:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.208 20:15:14 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:02.208 20:15:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:02.208 20:15:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.208 20:15:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.208 ************************************ 00:08:02.208 START TEST accel_missing_filename 00:08:02.208 ************************************ 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.208 20:15:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:02.208 20:15:14 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:02.208 [2024-07-22 20:15:14.132661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:02.208 [2024-07-22 20:15:14.132768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403163 ] 00:08:02.208 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.468 [2024-07-22 20:15:14.242540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.468 [2024-07-22 20:15:14.417870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.728 [2024-07-22 20:15:14.561703] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.989 [2024-07-22 20:15:14.923611] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:03.249 A filename is required. 
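"A filename is required." is the intended outcome of accel_missing_filename: compress and decompress workloads take their input through -l (see the accel_perf option list dumped further down in this log), and this negative test deliberately omits it. A well-formed compress run would look roughly like the compress_verify invocation that follows, minus the verify flag; the path here is illustrative:

    accel_perf -t 1 -w compress -l ./test/accel/bib   # -l names the uncompressed input file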
00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.249 00:08:03.249 real 0m1.130s 00:08:03.249 user 0m0.989s 00:08:03.249 sys 0m0.180s 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.249 20:15:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:03.249 ************************************ 00:08:03.249 END TEST accel_missing_filename 00:08:03.249 ************************************ 00:08:03.249 20:15:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.249 20:15:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.249 20:15:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:03.249 20:15:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.249 20:15:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.508 ************************************ 00:08:03.508 START TEST accel_compress_verify 00:08:03.508 ************************************ 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.508 20:15:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.508 20:15:15 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:03.508 20:15:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:03.509 [2024-07-22 20:15:15.334661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:03.509 [2024-07-22 20:15:15.334765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403508 ] 00:08:03.509 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.509 [2024-07-22 20:15:15.450181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.768 [2024-07-22 20:15:15.626058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.768 [2024-07-22 20:15:15.769468] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.338 [2024-07-22 20:15:16.130837] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:04.642 00:08:04.642 Compression does not support the verify option, aborting. 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.642 00:08:04.642 real 0m1.132s 00:08:04.642 user 0m0.989s 00:08:04.642 sys 0m0.179s 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.642 20:15:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:04.642 ************************************ 00:08:04.642 END TEST accel_compress_verify 00:08:04.642 ************************************ 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.642 20:15:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.642 ************************************ 00:08:04.642 START TEST accel_wrong_workload 00:08:04.642 ************************************ 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.642 20:15:16 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:04.642 20:15:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:04.642 Unsupported workload type: foobar 00:08:04.642 [2024-07-22 20:15:16.524135] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:04.642 accel_perf options: 00:08:04.642 [-h help message] 00:08:04.642 [-q queue depth per core] 00:08:04.642 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.642 [-T number of threads per core 00:08:04.642 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.642 [-t time in seconds] 00:08:04.642 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.642 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:04.642 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.642 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.642 [-S for crc32c workload, use this seed value (default 0) 00:08:04.642 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.642 [-f for fill workload, use this BYTE value (default 255) 00:08:04.642 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.642 [-y verify result if this switch is on] 00:08:04.642 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.642 Can be used to spread operations across a wider range of memory. 
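The usage text above enumerates every workload type accel_perf accepts, which is why the made-up "foobar" value is rejected before any work starts. For contrast, a well-formed invocation built only from options in that list (the same arguments the accel_crc32c test further down passes via accel_test):

    accel_perf -t 1 -w crc32c -S 32 -y   # 1-second crc32c run, seed 32, verify results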
00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.642 00:08:04.642 real 0m0.074s 00:08:04.642 user 0m0.079s 00:08:04.642 sys 0m0.039s 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.642 20:15:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:04.642 ************************************ 00:08:04.642 END TEST accel_wrong_workload 00:08:04.642 ************************************ 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.642 20:15:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.642 20:15:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.642 ************************************ 00:08:04.642 START TEST accel_negative_buffers 00:08:04.642 ************************************ 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.642 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:04.642 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:04.642 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:04.642 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:04.643 20:15:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:04.917 -x option must be non-negative. 
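Likewise, "-x option must be non-negative." is the point of accel_negative_buffers: per the option list repeated below, -x sets the number of xor source buffers with a default and minimum of 2, so a valid xor run would be along the lines of:

    accel_perf -t 1 -w xor -y -x 2   # xor across two source buffers, verify the result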
00:08:04.917 [2024-07-22 20:15:16.677817] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:04.917 accel_perf options: 00:08:04.917 [-h help message] 00:08:04.917 [-q queue depth per core] 00:08:04.917 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.917 [-T number of threads per core 00:08:04.917 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.917 [-t time in seconds] 00:08:04.917 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.917 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:04.917 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.917 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.917 [-S for crc32c workload, use this seed value (default 0) 00:08:04.917 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.917 [-f for fill workload, use this BYTE value (default 255) 00:08:04.917 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.917 [-y verify result if this switch is on] 00:08:04.917 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.917 Can be used to spread operations across a wider range of memory. 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.917 00:08:04.917 real 0m0.083s 00:08:04.917 user 0m0.086s 00:08:04.917 sys 0m0.042s 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.917 20:15:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:04.917 ************************************ 00:08:04.917 END TEST accel_negative_buffers 00:08:04.917 ************************************ 00:08:04.917 20:15:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.917 20:15:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:04.917 20:15:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:04.917 20:15:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.917 20:15:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.917 ************************************ 00:08:04.917 START TEST accel_crc32c 00:08:04.917 ************************************ 00:08:04.917 20:15:16 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:04.917 20:15:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:04.917 20:15:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:04.917 20:15:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:04.917 20:15:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.917 20:15:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:04.918 20:15:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:04.918 [2024-07-22 20:15:16.805536] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:04.918 [2024-07-22 20:15:16.805630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403882 ] 00:08:04.918 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.918 [2024-07-22 20:15:16.907574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.178 [2024-07-22 20:15:17.082575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:05.438 20:15:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.347 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:07.348 20:15:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.348 00:08:07.348 real 0m2.100s 00:08:07.348 user 0m1.964s 00:08:07.348 sys 0m0.150s 00:08:07.348 20:15:18 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.348 20:15:18 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:07.348 ************************************ 00:08:07.348 END TEST accel_crc32c 00:08:07.348 ************************************ 00:08:07.348 20:15:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.348 20:15:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:07.348 20:15:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:07.348 20:15:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.348 20:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.348 ************************************ 00:08:07.348 START TEST accel_crc32c_C2 00:08:07.348 ************************************ 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.348 20:15:18 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:07.348 20:15:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:07.348 [2024-07-22 20:15:18.997124] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:07.348 [2024-07-22 20:15:18.997238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404271 ] 00:08:07.348 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.348 [2024-07-22 20:15:19.111741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.348 [2024-07-22 20:15:19.287492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:08:07.609 20:15:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.521 00:08:09.521 real 0m2.133s 00:08:09.521 user 0m1.960s 00:08:09.521 sys 0m0.186s 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.521 20:15:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:09.521 ************************************ 00:08:09.521 END TEST accel_crc32c_C2 00:08:09.521 ************************************ 00:08:09.521 20:15:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.521 20:15:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:09.521 20:15:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:09.521 20:15:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.521 20:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.521 ************************************ 00:08:09.521 START TEST accel_copy 00:08:09.521 ************************************ 00:08:09.521 20:15:21 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
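Every xtrace line in this section is tagged with the case it belongs to (accel.accel_crc32c, accel.accel_crc32c_C2, accel.accel_copy, and so on), which makes it easy to isolate one case from a saved copy of this console output. A small sketch, assuming the output has been saved to a file called console.log (hypothetical name):

  # pull out only the accel_crc32c_C2 case that just finished
  grep ' accel.accel_crc32c_C2 ' console.log
  # list every accel case tag that appears in the log
  grep -oE 'accel\.accel_[A-Za-z0-9_]+' console.log | sort -u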
00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:09.521 20:15:21 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:09.522 20:15:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:09.522 [2024-07-22 20:15:21.212436] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:09.522 [2024-07-22 20:15:21.212543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404634 ] 00:08:09.522 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.522 [2024-07-22 20:15:21.336152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.522 [2024-07-22 20:15:21.515699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.782 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.783 20:15:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 
20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:11.695 20:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.695 00:08:11.695 real 0m2.156s 00:08:11.695 user 0m1.958s 00:08:11.695 sys 0m0.209s 00:08:11.695 20:15:23 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.695 20:15:23 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.695 ************************************ 00:08:11.695 END TEST accel_copy 00:08:11.695 ************************************ 00:08:11.695 20:15:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.695 20:15:23 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.695 20:15:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:11.695 20:15:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.695 20:15:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.695 ************************************ 00:08:11.695 START TEST accel_fill 00:08:11.695 ************************************ 00:08:11.695 20:15:23 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.695 20:15:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:11.695 20:15:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:11.695 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.695 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:11.696 20:15:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:11.696 [2024-07-22 20:15:23.431645] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:11.696 [2024-07-22 20:15:23.431755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405200 ] 00:08:11.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.696 [2024-07-22 20:15:23.548053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.957 [2024-07-22 20:15:23.725648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.957 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
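The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs before every accel_perf run in this section; the cases still complete, so it is informational here, but the kernel's 2 MiB hugepage state can be checked directly when it matters. A sketch using standard procfs/sysfs paths, assuming a NUMA-enabled Linux kernel:

  # overall hugepage accounting
  grep -i huge /proc/meminfo
  # 2048 kB pages currently reserved on node 1, the node named in the EAL notice
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages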
00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.958 20:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:13.872 20:15:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.872 00:08:13.872 real 0m2.138s 00:08:13.872 user 0m1.983s 00:08:13.872 sys 0m0.167s 00:08:13.872 20:15:25 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.872 20:15:25 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 ************************************ 00:08:13.872 END TEST accel_fill 00:08:13.872 ************************************ 00:08:13.872 20:15:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.872 20:15:25 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:13.872 20:15:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:13.872 20:15:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.872 20:15:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 ************************************ 00:08:13.872 START TEST accel_copy_crc32c 00:08:13.872 ************************************ 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:13.872 20:15:25 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:13.872 [2024-07-22 20:15:25.633898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:13.872 [2024-07-22 20:15:25.634007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405669 ] 00:08:13.872 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.872 [2024-07-22 20:15:25.751872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.133 [2024-07-22 20:15:25.929665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.133 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:14.134 
20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.134 20:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.046 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.047 00:08:16.047 real 0m2.134s 00:08:16.047 user 0m1.972s 00:08:16.047 sys 0m0.175s 00:08:16.047 20:15:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.047 20:15:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:16.047 ************************************ 00:08:16.047 END TEST accel_copy_crc32c 00:08:16.047 ************************************ 00:08:16.047 20:15:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.047 20:15:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:16.047 20:15:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:16.047 20:15:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.047 20:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.047 ************************************ 00:08:16.047 START TEST accel_copy_crc32c_C2 00:08:16.047 ************************************ 00:08:16.047 20:15:27 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.047 20:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:16.047 [2024-07-22 20:15:27.835027] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:16.047 [2024-07-22 20:15:27.835134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406029 ] 00:08:16.047 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.047 [2024-07-22 20:15:27.945845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.308 [2024-07-22 20:15:28.122573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.308 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.308 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.308 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.308 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
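Each case ends with a real/user/sys triplet in the output format of bash's time builtin (for this copy_crc32c -C 2 case it appears a little further down). The same measurement can be taken on a stand-alone run; a hedged sketch that copies the flags from the command line logged above and assumes the generated -c JSON config can be dropped, since -c is normally optional for SPDK applications:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # time a direct run of the same 1-second copy_crc32c workload with -y -C 2
  time ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2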
00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.309 20:15:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.223 00:08:18.223 real 0m2.142s 00:08:18.223 user 0m1.971s 00:08:18.223 sys 0m0.184s 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.223 20:15:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:18.223 ************************************ 00:08:18.223 END TEST accel_copy_crc32c_C2 00:08:18.223 ************************************ 00:08:18.223 20:15:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.223 20:15:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:18.223 20:15:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:18.223 20:15:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.223 20:15:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.223 ************************************ 00:08:18.223 START TEST accel_dualcast 00:08:18.223 ************************************ 00:08:18.223 20:15:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:18.223 20:15:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:18.223 [2024-07-22 20:15:30.040090] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
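The dualcast case starting here follows the same shape as the preceding ones, so it is also straightforward to reproduce locally outside the Jenkins harness. A sketch assuming the SPDK tree at the workspace path shown in the log and, as above, that the /dev/fd/62 JSON config may be omitted:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same workload and flags as the logged dualcast command, minus the generated JSON config
  ./build/examples/accel_perf -t 1 -w dualcast -y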
00:08:18.223 [2024-07-22 20:15:30.040216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406524 ] 00:08:18.223 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.223 [2024-07-22 20:15:30.160804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.485 [2024-07-22 20:15:30.339060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.485 20:15:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:20.398 20:15:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.398 00:08:20.398 real 0m2.144s 00:08:20.398 user 0m1.974s 00:08:20.398 sys 0m0.182s 00:08:20.398 20:15:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.398 20:15:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:20.398 ************************************ 00:08:20.398 END TEST accel_dualcast 00:08:20.398 ************************************ 00:08:20.398 20:15:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.398 20:15:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:20.398 20:15:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:20.398 20:15:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.398 20:15:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.398 ************************************ 00:08:20.398 START TEST accel_compare 00:08:20.398 ************************************ 00:08:20.398 20:15:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:20.398 20:15:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:20.398 [2024-07-22 20:15:32.237531] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:20.398 [2024-07-22 20:15:32.237629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407071 ] 00:08:20.398 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.398 [2024-07-22 20:15:32.345088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.659 [2024-07-22 20:15:32.520214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.659 20:15:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 
20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:22.571 20:15:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.571 00:08:22.571 real 0m2.110s 00:08:22.571 user 0m1.953s 00:08:22.571 sys 0m0.169s 00:08:22.571 20:15:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.571 20:15:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:22.571 ************************************ 00:08:22.571 END TEST accel_compare 00:08:22.571 ************************************ 00:08:22.571 20:15:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.571 20:15:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:22.571 20:15:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:22.571 20:15:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.571 20:15:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.571 ************************************ 00:08:22.571 START TEST accel_xor 00:08:22.571 ************************************ 00:08:22.571 20:15:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:22.571 20:15:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:22.571 [2024-07-22 20:15:34.435459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:22.571 [2024-07-22 20:15:34.435568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407433 ] 00:08:22.571 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.571 [2024-07-22 20:15:34.547608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.832 [2024-07-22 20:15:34.723443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.092 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.093 20:15:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.004 00:08:25.004 real 0m2.133s 00:08:25.004 user 0m1.967s 00:08:25.004 sys 0m0.178s 00:08:25.004 20:15:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.004 20:15:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 ************************************ 00:08:25.004 END TEST accel_xor 00:08:25.004 ************************************ 00:08:25.004 20:15:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.004 20:15:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:25.004 20:15:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:25.004 20:15:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.004 20:15:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 ************************************ 00:08:25.004 START TEST accel_xor 00:08:25.004 ************************************ 00:08:25.004 20:15:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:25.004 20:15:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:25.005 20:15:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:25.005 [2024-07-22 20:15:36.648225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
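Note: this second accel_xor pass repeats the previous xor workload but adds -x 3 to the command line (run_test accel_xor accel_test -t 1 -w xor -y -x 3 above); the config trace that follows records val=3 where the first xor pass recorded val=2, i.e. the number of xor source buffers is raised from two to three (an inference from the trace values, not stated explicitly in the log). A hedged standalone equivalent, again assuming a local SPDK build tree:

  # xor workload with three source buffers instead of two
  ./build/examples/accel_perf -t 1 -w xor -y -x 3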
00:08:25.005 [2024-07-22 20:15:36.648347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407831 ] 00:08:25.005 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.005 [2024-07-22 20:15:36.769525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.005 [2024-07-22 20:15:36.948189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.265 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.266 20:15:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:27.218 20:15:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.218 00:08:27.218 real 0m2.157s 00:08:27.218 user 0m1.978s 00:08:27.218 sys 0m0.191s 00:08:27.218 20:15:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.218 20:15:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:27.218 ************************************ 00:08:27.218 END TEST accel_xor 00:08:27.218 ************************************ 00:08:27.218 20:15:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.218 20:15:38 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:27.218 20:15:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:27.218 20:15:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.218 20:15:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.218 ************************************ 00:08:27.218 START TEST accel_dif_verify 00:08:27.218 ************************************ 00:08:27.218 20:15:38 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:27.218 20:15:38 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:27.218 [2024-07-22 20:15:38.873719] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
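Note: the accel_dif_verify pass differs from the plain copy/compare workloads in what its config trace carries: two '4096 bytes' values plus '512 bytes' and '8 bytes'. Read together with the workload name, these look like the transfer size, the DIF block size and the per-block metadata size used for protection-information verification (an interpretation of the trace values, not something the log states). The invocation follows the same pattern as the earlier tests, so a hedged standalone equivalent under the same SPDK-build-tree assumption is:

  # 1-second DIF verify workload on the software accel module
  ./build/examples/accel_perf -t 1 -w dif_verify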
00:08:27.218 [2024-07-22 20:15:38.873831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408469 ] 00:08:27.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.218 [2024-07-22 20:15:38.992206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.218 [2024-07-22 20:15:39.169938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.479 20:15:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:29.392 20:15:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.392 00:08:29.392 real 0m2.150s 00:08:29.392 user 0m1.971s 00:08:29.392 sys 0m0.193s 00:08:29.392 20:15:40 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.392 20:15:40 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:29.392 ************************************ 00:08:29.392 END TEST accel_dif_verify 00:08:29.392 ************************************ 00:08:29.392 20:15:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:29.392 20:15:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:29.392 20:15:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:29.392 20:15:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.392 20:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.392 ************************************ 00:08:29.392 START TEST accel_dif_generate 00:08:29.392 ************************************ 00:08:29.392 20:15:41 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.392 
20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:29.392 20:15:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:29.392 [2024-07-22 20:15:41.085684] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:29.392 [2024-07-22 20:15:41.085796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408834 ] 00:08:29.392 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.392 [2024-07-22 20:15:41.205578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.392 [2024-07-22 20:15:41.383159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:29.654 20:15:41 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.654 20:15:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.568 20:15:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:31.568 20:15:43 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.568 00:08:31.568 real 0m2.141s 00:08:31.568 user 0m1.972s 00:08:31.568 sys 0m0.183s 00:08:31.568 20:15:43 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.568 20:15:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 ************************************ 00:08:31.568 END TEST accel_dif_generate 00:08:31.568 ************************************ 00:08:31.568 20:15:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.568 20:15:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:31.568 20:15:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:31.568 20:15:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.568 20:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 ************************************ 00:08:31.568 START TEST accel_dif_generate_copy 00:08:31.568 ************************************ 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:31.568 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:31.569 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:31.569 [2024-07-22 20:15:43.275347] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
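Both DIF workloads above are driven through the same accel_perf example binary; the trace records the exact command line, with the JSON accel config supplied by the harness on /dev/fd/62. A minimal standalone sketch, assuming the default software module and assuming accel_perf is acceptable to run without that harness-supplied config (an assumption, not something this log confirms):

  # Re-run the two DIF workloads traced above, one second each (-t 1, -w <workload>
  # taken verbatim from the trace). SPDK_DIR mirrors the workspace path in the log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy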
00:08:31.569 [2024-07-22 20:15:43.275426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409207 ] 00:08:31.569 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.569 [2024-07-22 20:15:43.368933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.569 [2024-07-22 20:15:43.543421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.830 20:15:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.743 00:08:33.743 real 0m2.088s 00:08:33.743 user 0m1.960s 00:08:33.743 sys 0m0.142s 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.743 20:15:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:33.743 ************************************ 00:08:33.743 END TEST accel_dif_generate_copy 00:08:33.743 ************************************ 00:08:33.743 20:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.743 20:15:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:33.743 20:15:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:33.743 20:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:33.743 20:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.743 20:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.743 ************************************ 00:08:33.744 START TEST accel_comp 00:08:33.744 ************************************ 00:08:33.744 20:15:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:33.744 20:15:45 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:33.744 20:15:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:33.744 [2024-07-22 20:15:45.460573] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:33.744 [2024-07-22 20:15:45.460688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409785 ] 00:08:33.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.744 [2024-07-22 20:15:45.581499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.744 [2024-07-22 20:15:45.759032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.005 20:15:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:35.919 20:15:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.919 00:08:35.919 real 0m2.148s 00:08:35.919 user 0m1.973s 00:08:35.919 sys 0m0.189s 00:08:35.919 20:15:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.919 20:15:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 ************************************ 00:08:35.919 END TEST accel_comp 00:08:35.919 ************************************ 00:08:35.919 20:15:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.919 20:15:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:35.919 20:15:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:35.919 20:15:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.919 20:15:47 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.919 ************************************ 00:08:35.919 START TEST accel_decomp 00:08:35.919 ************************************ 00:08:35.919 20:15:47 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:35.919 20:15:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:35.919 [2024-07-22 20:15:47.676008] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
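The compress and decompress cases reuse the command lines visible in the trace, pointing -l at the bib sample file under test/accel; flag semantics beyond what the trace itself shows are not asserted here. A hedged sketch of the two invocations, again leaving out the harness-supplied -c /dev/fd/62 config:

  # Command lines copied from the trace; -l names the sample input file in the
  # SPDK tree, and -y appears only on the decompress run in this log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK_DIR/test/accel/bib"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y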
00:08:35.919 [2024-07-22 20:15:47.676116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410234 ] 00:08:35.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.919 [2024-07-22 20:15:47.791270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.180 [2024-07-22 20:15:47.967738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.180 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:36.181 20:15:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:38.094 20:15:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.094 00:08:38.094 real 0m2.139s 00:08:38.094 user 0m1.977s 00:08:38.094 sys 0m0.176s 00:08:38.094 20:15:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.094 20:15:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:38.094 ************************************ 00:08:38.094 END TEST accel_decomp 00:08:38.094 ************************************ 00:08:38.094 20:15:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:38.094 20:15:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:38.094 20:15:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:38.094 20:15:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.094 20:15:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.094 ************************************ 00:08:38.094 START TEST accel_decomp_full 00:08:38.094 ************************************ 00:08:38.094 20:15:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:38.094 20:15:49 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:38.094 20:15:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:38.094 [2024-07-22 20:15:49.892759] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:38.094 [2024-07-22 20:15:49.892864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410597 ] 00:08:38.094 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.094 [2024-07-22 20:15:50.014151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.355 [2024-07-22 20:15:50.197151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.355 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.356 20:15:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:40.270 20:15:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.270 00:08:40.270 real 0m2.177s 00:08:40.270 user 0m1.999s 00:08:40.270 sys 0m0.192s 00:08:40.270 20:15:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.270 20:15:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:40.270 ************************************ 00:08:40.270 END TEST accel_decomp_full 00:08:40.270 ************************************ 00:08:40.270 20:15:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:40.270 20:15:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:40.270 20:15:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:40.270 20:15:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.270 20:15:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.270 ************************************ 00:08:40.270 START TEST accel_decomp_mcore 00:08:40.270 ************************************ 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:40.270 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:40.270 [2024-07-22 20:15:52.131624] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
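The element that distinguishes the mcore variant is the -m 0xf core mask, which the EAL parameters that follow carry through as -c 0xf, giving four available cores and four reactors. A small illustrative helper (not part of the SPDK tree) for reading such a mask:

  # Print which cores a hex mask selects; 0xf -> cores 0, 1, 2 and 3.
  mask=0xf
  for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && echo "core $i selected"
  done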
00:08:40.270 [2024-07-22 20:15:52.131734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411126 ] 00:08:40.270 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.270 [2024-07-22 20:15:52.248274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.530 [2024-07-22 20:15:52.428450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.531 [2024-07-22 20:15:52.428630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.531 [2024-07-22 20:15:52.428748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.531 [2024-07-22 20:15:52.428778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.791 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:40.792 20:15:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.704 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.705 00:08:42.705 real 0m2.160s 00:08:42.705 user 0m6.537s 00:08:42.705 sys 0m0.200s 00:08:42.705 20:15:54 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.705 20:15:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:42.705 ************************************ 00:08:42.705 END TEST accel_decomp_mcore 00:08:42.705 ************************************ 00:08:42.705 20:15:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:42.705 20:15:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:42.705 20:15:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:42.705 20:15:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.705 20:15:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.705 ************************************ 00:08:42.705 START TEST accel_decomp_full_mcore 00:08:42.705 ************************************ 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:42.705 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:42.705 [2024-07-22 20:15:54.367616] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
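The accel/accel.sh@19-@27 trace that repeats through this block is the harness reading accel_perf's start-up configuration summary (the 'var: val' pairs behind val=software, val=decompress, val='1 seconds' and so on) and remembering which module and opcode actually serviced the workload. Reconstructed from the trace alone rather than from the script source, the loop is roughly the following; the function name is hypothetical and details may differ from the real accel_test in test/accel/accel.sh:

  accel_test_sketch() {                   # hypothetical name for the helper traced above
      local var val accel_module accel_opc
      while IFS=: read -r var val; do                      # accel.sh@19 in the trace
          case "$var" in                                   # accel.sh@21
              *"Module") accel_module=${val##* } ;;        # accel.sh@22: e.g. "software"
              *"Workload Type") accel_opc=${val##* } ;;    # accel.sh@23: e.g. "decompress"
          esac
      done < <(accel_perf "$@")       # the wrapper at accel.sh@12/@15 that runs build/examples/accel_perf
      [[ -n $accel_module ]]          # the accel.sh@27 checks: a module and an opcode were reported,
      [[ -n $accel_opc ]]             # and in this job the software module is expected to handle decompress
      [[ $accel_module == software ]]
  }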
00:08:42.705 [2024-07-22 20:15:54.367761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411634 ] 00:08:42.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.705 [2024-07-22 20:15:54.489463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.705 [2024-07-22 20:15:54.671092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.705 [2024-07-22 20:15:54.671176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.705 [2024-07-22 20:15:54.671291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.705 [2024-07-22 20:15:54.671315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.965 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.966 20:15:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.877 00:08:44.877 real 0m2.199s 00:08:44.877 user 0m6.643s 00:08:44.877 sys 0m0.209s 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.877 20:15:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:44.877 ************************************ 00:08:44.877 END TEST accel_decomp_full_mcore 00:08:44.877 ************************************ 00:08:44.877 20:15:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:44.877 20:15:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:44.877 20:15:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:44.877 20:15:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.877 20:15:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.877 ************************************ 00:08:44.877 START TEST accel_decomp_mthread 00:08:44.877 ************************************ 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:44.877 20:15:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:44.877 [2024-07-22 20:15:56.630195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
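accel_decomp_mthread, which starts just above, reruns the same software decompress of test/accel/bib on one core but with -T 2, which, judging from the accel_perf command line in the trace, asks for two worker threads per reactor core instead of one. A minimal way to repeat the single-thread versus two-thread comparison by hand, with the workspace path taken from this job and the flag meanings inferred rather than confirmed:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress of test/accel/bib, first with 1 and then with 2 threads per core
  for t in 1 2; do
      "$SPDK/build/examples/accel_perf" -m 0x1 -t 1 -w decompress \
          -l "$SPDK/test/accel/bib" -y -T "$t"
  done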
00:08:44.877 [2024-07-22 20:15:56.630306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412011 ] 00:08:44.877 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.877 [2024-07-22 20:15:56.749299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.138 [2024-07-22 20:15:56.930160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.138 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.138 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.138 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.138 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.138 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.139 20:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.050 00:08:47.050 real 0m2.155s 00:08:47.050 user 0m1.978s 00:08:47.050 sys 0m0.192s 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.050 20:15:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:47.050 ************************************ 00:08:47.050 END TEST accel_decomp_mthread 00:08:47.050 ************************************ 00:08:47.050 20:15:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:47.050 20:15:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.050 20:15:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:47.050 20:15:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.050 20:15:58 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.050 ************************************ 00:08:47.050 START TEST accel_decomp_full_mthread 00:08:47.050 ************************************ 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.050 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:47.051 20:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:47.051 [2024-07-22 20:15:58.858752] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
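accel_decomp_full_mthread, launched just above, differs from the previous mthread run only in the extra -o 0. Comparing the two traces ('4096 bytes' without it, '111250 bytes' with it), -o 0 appears to make accel_perf treat the whole input file as a single transfer rather than the default 4096-byte chunks, which is what the _full_ test names refer to. The size being exercised can be checked directly against the trace; the path is the one used by this job:

  # compare with the '111250 bytes' transfer size reported by the -o 0 runs in this log
  stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib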
00:08:47.051 [2024-07-22 20:15:58.858858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412523 ] 00:08:47.051 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.051 [2024-07-22 20:15:58.977906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.311 [2024-07-22 20:15:59.155829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:47.311 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.312 20:15:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.312 20:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.238 00:08:49.238 real 0m2.189s 00:08:49.238 user 0m1.999s 00:08:49.238 sys 0m0.203s 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.238 20:16:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:49.238 ************************************ 00:08:49.238 END 
TEST accel_decomp_full_mthread 00:08:49.238 ************************************ 00:08:49.238 20:16:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:49.238 20:16:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:49.238 20:16:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:49.238 20:16:01 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:49.238 20:16:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:49.238 20:16:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.238 20:16:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.238 20:16:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.238 20:16:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.238 20:16:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.238 20:16:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.238 20:16:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.238 20:16:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:49.238 20:16:01 accel -- accel/accel.sh@41 -- # jq -r . 00:08:49.238 ************************************ 00:08:49.238 START TEST accel_dif_functional_tests 00:08:49.238 ************************************ 00:08:49.238 20:16:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:49.238 [2024-07-22 20:16:01.155326] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:49.238 [2024-07-22 20:16:01.155437] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413057 ] 00:08:49.238 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.570 [2024-07-22 20:16:01.278990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.570 [2024-07-22 20:16:01.459351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.570 [2024-07-22 20:16:01.459525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.570 [2024-07-22 20:16:01.459530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.830 00:08:49.830 00:08:49.830 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.830 http://cunit.sourceforge.net/ 00:08:49.830 00:08:49.830 00:08:49.830 Suite: accel_dif 00:08:49.830 Test: verify: DIF generated, GUARD check ...passed 00:08:49.830 Test: verify: DIF generated, APPTAG check ...passed 00:08:49.830 Test: verify: DIF generated, REFTAG check ...passed 00:08:49.830 Test: verify: DIF not generated, GUARD check ...[2024-07-22 20:16:01.681686] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:49.830 passed 00:08:49.830 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 20:16:01.681752] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:49.830 passed 00:08:49.830 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 20:16:01.681784] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:49.830 passed 00:08:49.830 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:49.830 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 
20:16:01.681861] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:49.830 passed 00:08:49.830 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:49.830 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:49.830 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:49.830 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 20:16:01.682021] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:49.830 passed 00:08:49.830 Test: verify copy: DIF generated, GUARD check ...passed 00:08:49.830 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:49.830 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:49.830 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 20:16:01.682219] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:49.830 passed 00:08:49.830 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 20:16:01.682261] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:49.830 passed 00:08:49.830 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 20:16:01.682303] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:49.830 passed 00:08:49.830 Test: generate copy: DIF generated, GUARD check ...passed 00:08:49.830 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:49.830 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:49.830 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:49.830 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:49.830 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:49.830 Test: generate copy: iovecs-len validate ...[2024-07-22 20:16:01.682619] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
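The dif.c *ERROR* lines in the accel_dif suite above are expected output: the 'not generated' and 'incorrect' cases deliberately corrupt the Guard, App Tag or Ref Tag and assert that the verify and generate-copy paths in dif.c reject them, so each error is immediately followed by a passed verdict and the run still ends with every test green. The overall result is easiest to read from the CUnit summary that follows; for a saved copy of this log (file name hypothetical) it can be pulled out with:

  grep -A 4 'Run Summary' accel_dif.log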
00:08:49.830 passed 00:08:49.830 Test: generate copy: buffer alignment validate ...passed 00:08:49.830 00:08:49.830 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.830 suites 1 1 n/a 0 0 00:08:49.830 tests 26 26 26 0 0 00:08:49.830 asserts 115 115 115 0 n/a 00:08:49.830 00:08:49.830 Elapsed time = 0.003 seconds 00:08:50.772 00:08:50.772 real 0m1.452s 00:08:50.772 user 0m2.757s 00:08:50.772 sys 0m0.250s 00:08:50.772 20:16:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.772 20:16:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 END TEST accel_dif_functional_tests 00:08:50.772 ************************************ 00:08:50.772 20:16:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:50.772 00:08:50.772 real 0m51.473s 00:08:50.772 user 0m57.056s 00:08:50.772 sys 0m6.153s 00:08:50.772 20:16:02 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.772 20:16:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 END TEST accel 00:08:50.772 ************************************ 00:08:50.772 20:16:02 -- common/autotest_common.sh@1142 -- # return 0 00:08:50.772 20:16:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:50.772 20:16:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:50.772 20:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.772 20:16:02 -- common/autotest_common.sh@10 -- # set +x 00:08:50.772 ************************************ 00:08:50.772 START TEST accel_rpc 00:08:50.772 ************************************ 00:08:50.772 20:16:02 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:50.772 * Looking for test storage... 00:08:50.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:50.773 20:16:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:50.773 20:16:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:50.773 20:16:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3413453 00:08:50.773 20:16:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3413453 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3413453 ']' 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.773 20:16:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.033 [2024-07-22 20:16:02.808757] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
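The accel_rpc suite that begins above starts spdk_tgt with --wait-for-rpc, so the framework stays uninitialized and opcode-to-module assignments can still be changed over the RPC socket; the trace that follows assigns the copy opcode to a module name that does not exist ('incorrect'), then to software, finishes initialization, and reads the assignment back. The same sequence can be driven by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket shown above (workspace path taken from this job):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m software      # assign the copy opcode before the framework starts
  $RPC framework_start_init                      # complete initialization with that assignment
  $RPC accel_get_opc_assignments | jq -r .copy   # prints "software", which is what the test greps for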
00:08:51.033 [2024-07-22 20:16:02.808884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413453 ] 00:08:51.033 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.033 [2024-07-22 20:16:02.919969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.293 [2024-07-22 20:16:03.096570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.553 20:16:03 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.553 20:16:03 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:51.553 20:16:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:51.553 20:16:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:51.553 20:16:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:51.553 20:16:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:51.553 20:16:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:51.553 20:16:03 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.553 20:16:03 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.553 20:16:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.553 ************************************ 00:08:51.553 START TEST accel_assign_opcode 00:08:51.553 ************************************ 00:08:51.553 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:51.553 20:16:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:51.553 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.553 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:51.553 [2024-07-22 20:16:03.574335] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:51.814 [2024-07-22 20:16:03.586346] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.814 20:16:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.385 software 00:08:52.385 00:08:52.385 real 0m0.624s 00:08:52.385 user 0m0.050s 00:08:52.385 sys 0m0.011s 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.385 20:16:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 ************************************ 00:08:52.385 END TEST accel_assign_opcode 00:08:52.385 ************************************ 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:52.385 20:16:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3413453 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3413453 ']' 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3413453 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413453 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413453' 00:08:52.385 killing process with pid 3413453 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@967 -- # kill 3413453 00:08:52.385 20:16:04 accel_rpc -- common/autotest_common.sh@972 -- # wait 3413453 00:08:54.298 00:08:54.298 real 0m3.271s 00:08:54.298 user 0m3.211s 00:08:54.298 sys 0m0.515s 00:08:54.298 20:16:05 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.298 20:16:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.298 ************************************ 00:08:54.298 END TEST accel_rpc 00:08:54.298 ************************************ 00:08:54.298 20:16:05 -- common/autotest_common.sh@1142 -- # return 0 00:08:54.298 20:16:05 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:54.298 20:16:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.298 20:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.298 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:08:54.298 ************************************ 00:08:54.298 START TEST app_cmdline 00:08:54.298 ************************************ 00:08:54.298 20:16:05 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:54.298 * Looking for test storage... 
00:08:54.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:54.298 20:16:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:54.298 20:16:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3414202 00:08:54.298 20:16:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3414202 00:08:54.298 20:16:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3414202 ']' 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.298 20:16:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:54.298 [2024-07-22 20:16:06.175292] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:54.298 [2024-07-22 20:16:06.175425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414202 ] 00:08:54.298 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.298 [2024-07-22 20:16:06.303100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.558 [2024-07-22 20:16:06.484466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.129 20:16:07 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.129 20:16:07 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:55.129 20:16:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:55.391 { 00:08:55.391 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:08:55.391 "fields": { 00:08:55.391 "major": 24, 00:08:55.391 "minor": 9, 00:08:55.391 "patch": 0, 00:08:55.391 "suffix": "-pre", 00:08:55.391 "commit": "f7b31b2b9" 00:08:55.391 } 00:08:55.391 } 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:55.391 20:16:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.391 20:16:07 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.652 request: 00:08:55.652 { 00:08:55.652 "method": "env_dpdk_get_mem_stats", 00:08:55.652 "req_id": 1 00:08:55.652 } 00:08:55.652 Got JSON-RPC error response 00:08:55.652 response: 00:08:55.652 { 00:08:55.652 "code": -32601, 00:08:55.652 "message": "Method not found" 00:08:55.652 } 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.652 20:16:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3414202 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3414202 ']' 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3414202 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414202 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414202' 00:08:55.652 killing process with pid 3414202 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@967 -- # kill 3414202 00:08:55.652 20:16:07 app_cmdline -- common/autotest_common.sh@972 -- # wait 3414202 00:08:57.565 00:08:57.565 real 0m3.135s 00:08:57.565 user 0m3.320s 00:08:57.565 sys 0m0.531s 00:08:57.565 20:16:09 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:57.565 20:16:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 ************************************ 00:08:57.565 END TEST app_cmdline 00:08:57.565 ************************************ 00:08:57.565 20:16:09 -- common/autotest_common.sh@1142 -- # return 0 00:08:57.565 20:16:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:57.565 20:16:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.565 20:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.565 20:16:09 -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 ************************************ 00:08:57.565 START TEST version 00:08:57.565 ************************************ 00:08:57.565 20:16:09 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:57.565 * Looking for test storage... 00:08:57.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:57.565 20:16:09 version -- app/version.sh@17 -- # get_header_version major 00:08:57.565 20:16:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # cut -f2 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.565 20:16:09 version -- app/version.sh@17 -- # major=24 00:08:57.565 20:16:09 version -- app/version.sh@18 -- # get_header_version minor 00:08:57.565 20:16:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # cut -f2 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.565 20:16:09 version -- app/version.sh@18 -- # minor=9 00:08:57.565 20:16:09 version -- app/version.sh@19 -- # get_header_version patch 00:08:57.565 20:16:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # cut -f2 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.565 20:16:09 version -- app/version.sh@19 -- # patch=0 00:08:57.565 20:16:09 version -- app/version.sh@20 -- # get_header_version suffix 00:08:57.565 20:16:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # cut -f2 00:08:57.565 20:16:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:57.565 20:16:09 version -- app/version.sh@20 -- # suffix=-pre 00:08:57.565 20:16:09 version -- app/version.sh@22 -- # version=24.9 00:08:57.565 20:16:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:57.565 20:16:09 version -- app/version.sh@28 -- # version=24.9rc0 00:08:57.565 20:16:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:57.565 20:16:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:57.565 20:16:09 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:57.565 20:16:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:57.565 00:08:57.565 real 0m0.179s 00:08:57.565 user 0m0.086s 00:08:57.565 sys 0m0.134s 00:08:57.565 20:16:09 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.565 20:16:09 version -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 ************************************ 00:08:57.565 END TEST version 00:08:57.565 ************************************ 00:08:57.565 20:16:09 -- common/autotest_common.sh@1142 -- # return 0 00:08:57.565 20:16:09 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@198 -- # uname -s 00:08:57.565 20:16:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:57.565 20:16:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:57.565 20:16:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:57.565 20:16:09 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:57.565 20:16:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.565 20:16:09 -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 20:16:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:57.565 20:16:09 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:57.565 20:16:09 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:57.565 20:16:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.565 20:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.565 20:16:09 -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 ************************************ 00:08:57.565 START TEST nvmf_tcp 00:08:57.565 ************************************ 00:08:57.565 20:16:09 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:57.565 * Looking for test storage... 00:08:57.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:57.565 20:16:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:57.565 20:16:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:57.565 20:16:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:57.565 20:16:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.565 20:16:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.565 20:16:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.827 ************************************ 00:08:57.827 START TEST nvmf_target_core 00:08:57.827 ************************************ 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:57.827 * Looking for test storage... 
00:08:57.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.827 ************************************ 00:08:57.827 START TEST nvmf_abort 00:08:57.827 ************************************ 00:08:57.827 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:57.827 * Looking for test storage... 00:08:58.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.090 20:16:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.090 20:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.679 20:16:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:04.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:04.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.679 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.680 20:16:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:04.680 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:04.680 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.680 20:16:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.680 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:09:04.941 00:09:04.941 --- 10.0.0.2 ping statistics --- 00:09:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.941 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:09:04.941 00:09:04.941 --- 10.0.0.1 ping statistics --- 00:09:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.941 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.941 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3418657 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3418657 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3418657 ']' 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.942 20:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.203 [2024-07-22 20:16:17.028028] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:05.203 [2024-07-22 20:16:17.028129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.203 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.203 [2024-07-22 20:16:17.168865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.465 [2024-07-22 20:16:17.372750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.465 [2024-07-22 20:16:17.372816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.465 [2024-07-22 20:16:17.372831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.465 [2024-07-22 20:16:17.372841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.465 [2024-07-22 20:16:17.372853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.465 [2024-07-22 20:16:17.373013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.465 [2024-07-22 20:16:17.373143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.465 [2024-07-22 20:16:17.373181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.036 [2024-07-22 20:16:17.844196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.036 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.036 Malloc0 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.037 Delay0 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.037 [2024-07-22 20:16:17.961983] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.037 20:16:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:06.037 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.297 [2024-07-22 20:16:18.155419] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:08.211 Initializing NVMe Controllers 00:09:08.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.211 controller IO queue size 128 less than required 00:09:08.211 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:08.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:08.211 Initialization complete. Launching workers. 
00:09:08.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31688 00:09:08.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31745, failed to submit 66 00:09:08.211 success 31688, unsuccess 57, failed 0 00:09:08.211 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.211 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.211 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:08.471 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.472 rmmod nvme_tcp 00:09:08.472 rmmod nvme_fabrics 00:09:08.472 rmmod nvme_keyring 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3418657 ']' 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3418657 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3418657 ']' 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3418657 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418657 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418657' 00:09:08.472 killing process with pid 3418657 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3418657 00:09:08.472 20:16:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3418657 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.414 20:16:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.329 00:09:11.329 real 0m13.495s 00:09:11.329 user 0m14.994s 00:09:11.329 sys 0m6.101s 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.329 ************************************ 00:09:11.329 END TEST nvmf_abort 00:09:11.329 ************************************ 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.329 ************************************ 00:09:11.329 START TEST nvmf_ns_hotplug_stress 00:09:11.329 ************************************ 00:09:11.329 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.591 * Looking for test storage... 
00:09:11.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.591 20:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
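The gather_supported_nvmf_pci_devs trace that continues below buckets candidate NICs by PCI vendor/device ID into e810, x722 and mlx arrays before picking the TCP test interfaces. A rough sketch of that classification (the real script consults its pci_bus_cache lookup, which is not visible in the trace; lspci here is an illustrative stand-in and only a subset of the device IDs is listed):

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
while read -r addr vendor device; do
    case "$vendor:$device" in
        "$intel:0x1592" | "$intel:0x159b")     e810+=("$addr") ;;   # Intel E810 (ice)
        "$intel:0x37d2")                       x722+=("$addr") ;;   # Intel X722
        "$mellanox:0x1017" | "$mellanox:0x1019" | "$mellanox:0x101d" | "$mellanox:0x1021")
                                               mlx+=("$addr")  ;;   # Mellanox ConnectX family (subset)
    esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
pci_devs=("${e810[@]}")                                             # the e810/tcp path keeps only E810 ports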
00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.262 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.263 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.263 20:16:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.263 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.263 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.263 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:09:18.524 00:09:18.524 --- 10.0.0.2 ping statistics --- 00:09:18.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.524 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:09:18.524 00:09:18.524 --- 10.0.0.1 ping statistics --- 00:09:18.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.524 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.524 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3423688 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3423688 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3423688 ']' 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
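Before the target comes up, nvmf_tcp_init (traced above) pins one physical port into a private network namespace so initiator and target traffic actually crosses the link: 10.0.0.1 stays on cvl_0_1 in the root namespace, 10.0.0.2 lives on cvl_0_0 inside cvl_0_0_ns_spdk, an iptables rule opens port 4420, and a ping in each direction acts as a sanity check before nvmf_tgt is launched in that namespace. Condensed from the commands in the trace (paths shortened; the socket-wait loop stands in for waitforlisten and is an assumption):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the target
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done            # wait for the RPC socket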
00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.785 20:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.785 [2024-07-22 20:16:30.676988] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:18.785 [2024-07-22 20:16:30.677113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.785 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.046 [2024-07-22 20:16:30.827785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.046 [2024-07-22 20:16:31.053664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.046 [2024-07-22 20:16:31.053735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.046 [2024-07-22 20:16:31.053750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.046 [2024-07-22 20:16:31.053760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.046 [2024-07-22 20:16:31.053772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.046 [2024-07-22 20:16:31.053946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.046 [2024-07-22 20:16:31.054080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.046 [2024-07-22 20:16:31.054111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:19.617 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:19.617 [2024-07-22 20:16:31.597920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.877 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.877 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.138 
[2024-07-22 20:16:31.954753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.138 20:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.138 20:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:20.399 Malloc0 00:09:20.399 20:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.659 Delay0 00:09:20.659 20:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.920 20:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:20.920 NULL1 00:09:20.921 20:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:21.181 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:21.181 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3424078 00:09:21.181 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:21.181 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.181 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.181 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.443 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:21.443 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:21.702 [2024-07-22 20:16:33.493429] bdev.c:5060:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:09:21.702 true 00:09:21.702 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:21.702 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.702 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.962 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:21.962 20:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:22.223 true 00:09:22.223 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:22.223 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.223 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.484 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:22.484 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:22.745 true 00:09:22.745 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:22.745 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.745 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.005 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:23.005 20:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:23.266 true 00:09:23.266 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:23.266 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.266 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.527 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:23.527 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:23.788 true 00:09:23.788 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:23.788 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.788 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.049 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:24.050 20:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:24.311 true 00:09:24.311 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:24.311 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.311 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.572 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:24.572 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:24.834 true 00:09:24.834 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:24.834 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.834 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.094 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:25.094 20:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:25.356 true 00:09:25.356 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:25.356 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.356 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.617 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:25.617 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:25.617 true 00:09:25.877 20:16:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:25.877 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.877 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.139 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:26.139 20:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:26.139 true 00:09:26.139 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:26.139 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.400 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.661 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:26.661 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:26.661 true 00:09:26.661 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:26.661 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.922 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.183 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:27.183 20:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:27.183 true 00:09:27.183 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:27.183 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.444 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.705 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:27.705 20:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:27.705 true 00:09:27.705 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:27.705 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.966 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.966 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:27.966 20:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:28.227 true 00:09:28.227 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:28.227 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.488 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.488 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:28.488 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:28.749 true 00:09:28.749 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:28.749 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.011 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.011 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:29.011 20:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:29.271 true 00:09:29.271 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:29.271 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.532 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.532 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:29.532 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:29.793 true 00:09:29.793 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:29.793 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.793 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.055 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:30.055 20:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:30.316 true 00:09:30.316 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:30.316 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.316 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.577 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:30.577 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:30.577 true 00:09:30.577 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:30.838 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.838 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.099 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:31.099 20:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:31.099 true 00:09:31.099 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:31.099 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.361 20:16:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.361 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:31.361 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:31.621 true 00:09:31.622 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:31.622 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.883 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.883 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:31.883 20:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:32.144 true 00:09:32.144 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:32.144 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.405 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.405 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:32.405 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:32.666 true 00:09:32.666 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:32.666 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.998 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.998 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:32.998 20:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:32.998 true 00:09:33.284 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:33.284 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.284 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.545 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:33.545 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:33.545 true 00:09:33.545 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:33.545 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.805 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.066 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:34.066 20:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:34.066 true 00:09:34.066 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:34.066 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.327 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.327 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:34.327 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:34.590 true 00:09:34.590 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:34.590 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.851 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.851 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:34.851 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:35.112 true 00:09:35.112 20:16:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:35.112 20:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.373 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.373 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:35.373 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:35.634 true 00:09:35.634 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:35.634 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.895 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.895 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:35.895 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:36.156 true 00:09:36.156 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:36.156 20:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.156 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.417 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:36.417 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:36.678 true 00:09:36.678 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:36.678 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.678 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.939 20:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:36.939 20:16:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:37.199 true 00:09:37.199 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:37.199 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.199 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.460 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:37.460 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:37.460 true 00:09:37.721 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:37.721 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.721 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.982 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:37.982 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:37.982 true 00:09:37.983 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:37.983 20:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.244 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.505 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:38.505 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:38.505 true 00:09:38.505 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:38.505 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.767 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.028 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:39.028 20:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:39.028 true 00:09:39.028 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:39.028 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.289 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.550 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:39.550 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:39.550 true 00:09:39.550 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:39.550 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.811 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.073 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:40.073 20:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:40.073 true 00:09:40.073 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:40.073 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.334 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.595 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:40.595 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:40.595 true 00:09:40.595 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:40.595 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.856 20:16:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.856 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:40.856 20:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:41.117 true 00:09:41.117 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:41.117 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.378 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.378 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:41.378 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:41.639 true 00:09:41.639 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:41.639 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.900 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.900 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:41.900 20:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:42.161 true 00:09:42.161 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:42.161 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.423 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.423 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:42.423 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:42.684 true 00:09:42.684 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:42.684 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.945 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.945 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:42.945 20:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:43.206 true 00:09:43.206 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:43.206 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.206 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.467 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:43.467 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:43.728 true 00:09:43.728 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:43.728 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.728 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.989 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:43.989 20:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:44.249 true 00:09:44.249 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:44.249 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.249 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.509 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:44.509 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:44.509 true 00:09:44.769 20:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:44.769 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.769 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.030 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:45.030 20:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:45.030 true 00:09:45.030 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:45.030 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.291 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.551 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:45.551 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:45.551 true 00:09:45.551 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:45.551 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.812 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.072 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:46.072 20:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:46.072 true 00:09:46.072 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:46.072 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.332 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.592 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:46.592 20:16:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:46.592 true 00:09:46.592 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:46.592 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.853 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.113 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:09:47.113 20:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:47.113 true 00:09:47.113 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:47.113 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.374 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.634 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:09:47.634 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:09:47.634 true 00:09:47.634 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:47.634 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.895 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.895 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:09:47.895 20:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:09:48.156 true 00:09:48.156 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:48.156 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.416 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.416 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:09:48.416 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:09:48.677 true 00:09:48.677 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:48.677 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.937 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.937 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:09:48.937 20:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:09:49.231 true 00:09:49.231 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:49.231 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.231 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.492 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:09:49.492 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:09:49.753 true 00:09:49.753 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:49.753 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.754 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.015 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:09:50.015 20:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:09:50.015 true 00:09:50.276 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:50.276 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.276 20:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.536 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:09:50.536 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:09:50.536 true 00:09:50.797 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:50.797 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.797 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.058 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:09:51.058 20:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:09:51.058 true 00:09:51.058 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:51.058 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.318 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.318 Initializing NVMe Controllers 00:09:51.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:51.319 Controller IO queue size 128, less than required. 00:09:51.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:51.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:51.319 Initialization complete. Launching workers. 
00:09:51.319 ======================================================== 00:09:51.319 Latency(us) 00:09:51.319 Device Information : IOPS MiB/s Average min max 00:09:51.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27269.80 13.32 4693.80 1797.95 11497.06 00:09:51.319 ======================================================== 00:09:51.319 Total : 27269.80 13.32 4693.80 1797.95 11497.06 00:09:51.319 00:09:51.319 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:09:51.319 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:09:51.580 true 00:09:51.580 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3424078 00:09:51.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3424078) - No such process 00:09:51.580 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3424078 00:09:51.580 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.840 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.841 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:51.841 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:51.841 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:51.841 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:51.841 20:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:52.103 null0 00:09:52.103 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.103 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.103 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:52.363 null1 00:09:52.363 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.363 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.363 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:52.363 null2 00:09:52.363 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.364 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.364 
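At this point the single-namespace phase is over: the kill -0 probe at ns_hotplug_stress.sh line 44 finally reports that the background I/O process (3424078) is gone, the script waits on it and removes namespaces 1 and 2, then begins creating the eight null bdevs used by the concurrent phase (continued below). For orientation, the xtrace markers that produced the bulk of the output above (sh@44-50, cleanup at sh@53-55) suggest a loop of roughly the following shape. This is a sketch reconstructed from the trace only, not the script verbatim; the rpc and PERF_PID names are shorthand introduced here, and the bare "true" lines in the log appear to be the JSON results printed by rpc.py after each bdev_null_resize call.

    # Sketch reconstructed from the xtrace above; not ns_hotplug_stress.sh verbatim.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=3424078      # background I/O job started earlier in the script (pid from this run)
    null_size=1028        # assumed seed for illustration; the real value is set before this excerpt

    while kill -0 "$PERF_PID"; do                                        # sh@44: loop until the I/O job exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # sh@45: hot-remove nsid 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                     # sh@49: 1029, 1030, ... in this run
        $rpc bdev_null_resize NULL1 "$null_size"                         # sh@50: grow the NULL1 bdev one step
    done
    wait "$PERF_PID"                                                     # sh@53
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # sh@54
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2           # sh@55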
20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:52.624 null3 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:52.624 null4 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.624 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:52.885 null5 00:09:52.885 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.885 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.885 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:53.147 null6 00:09:53.147 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:53.147 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:53.147 20:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:53.147 null7 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
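The sh@58-60 markers above show the setup for the concurrent phase: eight null bdevs (null0 through null7) are created, and the sh@62-64 markers that start here spawn one add_remove worker per bdev (sketched after the spawn completes, further below). A rough reconstruction of the setup loop from the trace, with $rpc as shorthand for scripts/rpc.py:

    # Sketch reconstructed from the xtrace (sh@58-60); not the script verbatim.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                      # sh@58
    pids=()                                         # sh@58
    for ((i = 0; i < nthreads; i++)); do            # sh@59
        # 100 MB null bdev with a 4096-byte block size; rpc.py prints the
        # new bdev name (null0 ... null7) on success, as seen in the log
        $rpc bdev_null_create "null$i" 100 4096     # sh@60
    done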
00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3430683 3430684 3430687 3430691 3430693 3430696 3430699 3430702 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.147 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.408 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.669 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.930 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.930 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.930 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
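The interleaved sh@16-18 and sh@62-66 markers in this stretch come from eight background copies of the script's add_remove helper, each repeatedly attaching and detaching its own namespace ID against its own null bdev while the others do the same. Reconstructed roughly from the trace (again not the script verbatim; $rpc, nthreads and pids are carried over from the setup sketch above):

    # Sketch reconstructed from the xtrace (sh@14-18 and sh@62-66).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    add_remove() {
        local nsid=$1 bdev=$2                                                          # sh@14
        for ((i = 0; i < 10; i++)); do                                                 # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do       # sh@62
        add_remove $((i + 1)) "null$i" &       # sh@63: nsid 1..8 paired with null0..null7
        pids+=($!)                             # sh@64
    done
    wait "${pids[@]}"                          # sh@66: 3430683 3430684 3430687 ... in this run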
00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.931 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:54.192 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:09:54.193 20:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:54.193 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.455 20:17:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.455 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.717 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.978 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.979 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:54.979 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.979 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.979 20:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.240 20:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.240 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.501 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.763 20:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.763 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.025 20:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.025 20:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.025 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.287 
20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.287 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.548 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.809 20:17:08 
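The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above come from the ns_hotplug_stress loop (target/ns_hotplug_stress.sh lines 16-18 in this log). A minimal sketch of that pattern, reconstructed from the log output rather than from the script itself (the per-namespace background workers and the exact bdev naming are assumptions), might look like:

    # Sketch of the hot-plug stress pattern seen above: one worker per namespace,
    # each repeatedly attaching and detaching its null bdev against cnode1.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for n in {1..8}; do
        (
            for (( i = 0; i < 10; ++i )); do
                "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"   # attach nsid n
                "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                      # and rip it back out
            done
        ) &
    done
    wait

The interleaving of different namespace IDs and loop counters in the log is what suggests the calls run concurrently rather than in a single sequential loop.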
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.809 rmmod nvme_tcp 00:09:56.809 rmmod nvme_fabrics 00:09:56.809 rmmod nvme_keyring 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3423688 ']' 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3423688 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3423688 ']' 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3423688 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.809 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3423688 00:09:57.070 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:57.070 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:57.070 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3423688' 00:09:57.070 killing process with pid 3423688 00:09:57.070 20:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3423688 00:09:57.070 20:17:08 
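The nvmftestfini / nvmfcleanup sequence above (nvmf/common.sh lines 117-125) unloads the kernel NVMe/TCP initiator modules after the test; the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are the modprobe output. A rough sketch of that cleanup, with the retry cadence assumed rather than taken from the script:

    # Sketch of the module teardown logged above: the modules can still be busy
    # briefly after the last disconnect, so the helper retries the unload.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumption: the real helper may pace retries differently
    done
    set -e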
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3423688 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.640 20:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.184 00:10:00.184 real 0m48.262s 00:10:00.184 user 3m15.016s 00:10:00.184 sys 0m16.609s 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.184 ************************************ 00:10:00.184 END TEST nvmf_ns_hotplug_stress 00:10:00.184 ************************************ 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.184 ************************************ 00:10:00.184 START TEST nvmf_delete_subsystem 00:10:00.184 ************************************ 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:00.184 * Looking for test storage... 
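The killprocess/wait pair that ends the hotplug test above tears down the nvmf_tgt reactor (pid 3423688 in this run). A hedged sketch of that helper, following the checks visible in the log (kill -0, ps comm lookup, kill, then wait) while the exact autotest_common.sh implementation is not shown here:

    # Sketch of the process teardown used above; details beyond what the log shows
    # are assumptions.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                       # already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                              # reap it, ignore its exit status
    }
    killprocess 3423688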
00:10:00.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
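The nvmf/common.sh lines above set up the host identity and ports the delete_subsystem test will use. A short recap of that setup, using the values visible in this log (the HOSTID derivation from the generated NQN is an assumption based on the logged values):

    # Ports and host identity as established by nvmf/common.sh in this run.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumed: keep just the UUID part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")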
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.184 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.185 20:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.773 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
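The e810/x722/mlx array population above buckets supported PCI device IDs by NIC family before deciding which ports the TCP transport can use. A tiny illustrative sketch of that classification, using only IDs that appear in this log excerpt (the mapping below is not the full list from nvmf/common.sh):

    # Classify a device ID into a NIC family, as the helper above does for 0x159b.
    declare -A family=(
        [0x1592]=e810 [0x159b]=e810      # Intel E810 (ice driver)
        [0x37d2]=x722                    # Intel X722
        [0x1017]=mlx  [0x1019]=mlx       # a couple of Mellanox IDs; not exhaustive
    )
    dev_id=0x159b
    echo "0000:4b:00.0 (0x8086 - $dev_id) -> ${family[$dev_id]:-unknown}"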
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.773 20:17:18 
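The "Found net devices under 0000:4b:00.0: cvl_0_0" lines come from mapping each PCI address to its kernel netdev name through sysfs. A minimal sketch of that lookup, mirroring the glob and path-stripping visible in the log (it assumes the PCI devices actually exist on the machine running it):

    # Map a PCI address to its network interface name(s), as logged above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # one sysfs entry per netdev
        pci_net_devs=( "${pci_net_devs[@]##*/}" )            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done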
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.773 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:10:07.035 00:10:07.035 --- 10.0.0.2 ping statistics --- 00:10:07.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.035 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:07.035 00:10:07.035 --- 10.0.0.1 ping statistics --- 00:10:07.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.035 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.035 20:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3435961 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3435961 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3435961 ']' 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.035 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:07.297 [2024-07-22 20:17:19.137481] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
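The trace above is the nvmf_tcp_init path from nvmf/common.sh: one port of the E810 pair is moved into a private network namespace and given the target address, the other port stays in the default namespace as the initiator, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the same steps follows; it is not the verbatim common.sh code, and the interface names, namespace name, and $SPDK_DIR path are placeholders standing in for the cvl_0_0/cvl_0_1/cvl_0_0_ns_spdk values seen in this run.

```bash
#!/usr/bin/env bash
# Minimal sketch of the netns-based NVMe/TCP test topology (assumptions noted above).
set -euo pipefail

NS=spdk_tgt_ns            # placeholder for cvl_0_0_ns_spdk
TGT_IF=if_tgt             # placeholder for cvl_0_0 (target-side port)
INI_IF=if_ini             # placeholder for cvl_0_1 (initiator-side port)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
ip addr add "$INI_IP/24" dev "$INI_IF"         # initiator port stays in the default ns
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic on the default listener port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# sanity-check reachability in both directions, as the trace does
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

modprobe nvme-tcp

# start the SPDK target inside the namespace ($SPDK_DIR is an assumed checkout path)
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
```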
00:10:07.297 [2024-07-22 20:17:19.137602] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.297 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.297 [2024-07-22 20:17:19.271872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:07.557 [2024-07-22 20:17:19.452694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.557 [2024-07-22 20:17:19.452738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.557 [2024-07-22 20:17:19.452750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.557 [2024-07-22 20:17:19.452760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.557 [2024-07-22 20:17:19.452770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.557 [2024-07-22 20:17:19.452924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.557 [2024-07-22 20:17:19.452956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 [2024-07-22 20:17:19.902307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 [2024-07-22 20:17:19.918551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 NULL1 00:10:08.129 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.130 Delay0 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3436131 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:08.130 20:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:08.130 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.130 [2024-07-22 20:17:20.043823] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
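With the target up, delete_subsystem.sh drives the rest over JSON-RPC: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O is still in flight when the subsystem is torn down; spdk_nvme_perf is then started against the listener and the subsystem is deleted underneath it. A standalone approximation is sketched below using scripts/rpc.py; the $SPDK_DIR path and the default RPC socket are assumptions, while the flag values are the ones visible in this run.

```bash
#!/usr/bin/env bash
# Sketch of the delete-subsystem-under-load sequence; not the test script itself.
set -euo pipefail

RPC="$SPDK_DIR/scripts/rpc.py"        # assumes a standard SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# a 1000 MB null bdev behind a delay bdev keeps requests queued long enough
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

# 70/30 random read/write load against the listener, then pull the subsystem away
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2
"$RPC" nvmf_delete_subsystem "$NQN"
```

The "Read/Write completed with error (sct=0, sc=8)" lines and the "starting I/O failed: -6" messages that follow in the trace are the expected fallout of exactly this sequence: the perf workers lose their queue pairs when cnode1 disappears mid-run.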
00:10:10.041 20:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.041 20:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.041 20:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 starting I/O failed: -6 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Read completed with error (sct=0, sc=8) 00:10:10.304 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 [2024-07-22 20:17:22.309322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(5) to be set 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed 
with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Read completed with error 
(sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 starting I/O failed: -6 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 [2024-07-22 20:17:22.312267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(5) to be set 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 
Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Read completed with error (sct=0, sc=8) 00:10:10.305 Write completed with error (sct=0, sc=8) 00:10:11.310 [2024-07-22 20:17:23.270980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025600 is same with the state(5) to be set 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 [2024-07-22 20:17:23.312674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(5) to be set 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write 
completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 [2024-07-22 20:17:23.313634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026500 is same with the state(5) to be set 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Read completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.310 Write completed with error (sct=0, sc=8) 00:10:11.311 Write completed with error (sct=0, sc=8) 00:10:11.311 [2024-07-22 20:17:23.315053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(5) to be set 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Write completed with error (sct=0, sc=8) 00:10:11.311 Write completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 Read completed with error (sct=0, sc=8) 00:10:11.311 [2024-07-22 20:17:23.315478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(5) to be set 00:10:11.311 Initializing NVMe Controllers 00:10:11.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.311 Controller IO queue size 128, less than required. 00:10:11.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:11.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:11.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:11.311 Initialization complete. Launching workers. 00:10:11.311 ======================================================== 00:10:11.311 Latency(us) 00:10:11.311 Device Information : IOPS MiB/s Average min max 00:10:11.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.09 0.08 889148.17 381.07 1009378.16 00:10:11.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.68 0.08 947496.82 422.94 2002941.91 00:10:11.311 ======================================================== 00:10:11.311 Total : 327.77 0.16 916684.49 381.07 2002941.91 00:10:11.311 00:10:11.311 [2024-07-22 20:17:23.318278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025600 (9): Bad file descriptor 00:10:11.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:11.311 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:11.311 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3436131 00:10:11.311 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3436131 00:10:11.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3436131) - No such process 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3436131 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3436131 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3436131 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.881 [2024-07-22 20:17:23.848231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3436873 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:11.881 20:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.142 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.142 [2024-07-22 20:17:23.955443] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
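After the subsystem is re-created and a second spdk_nvme_perf is launched, the trace shows a small poll loop (the delay=0 / kill -0 / sleep 0.5 lines around delete_subsystem.sh@56-60): the script simply waits for the backgrounded perf process to exit on its own and fails the test if it is still alive after roughly 20 half-second checks. A sketch of that loop, with $perf_pid assumed to hold the PID of the backgrounded perf run:

```bash
# Poll until the backgrounded spdk_nvme_perf exits, bounding the wait at ~10 s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf ($perf_pid) did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done

# reap the process; a non-zero status is acceptable when its subsystem was
# deleted mid-run, so don't let it abort the surrounding script
wait "$perf_pid" || true
```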
00:10:12.403 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.403 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:12.403 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.973 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.973 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:12.973 20:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.543 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.544 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:13.544 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.114 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.114 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:14.114 20:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.374 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.374 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:14.374 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.944 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.944 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:14.944 20:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.204 Initializing NVMe Controllers 00:10:15.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:15.204 Controller IO queue size 128, less than required. 00:10:15.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:15.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:15.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:15.204 Initialization complete. Launching workers. 
00:10:15.205 ======================================================== 00:10:15.205 Latency(us) 00:10:15.205 Device Information : IOPS MiB/s Average min max 00:10:15.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002387.43 1000140.46 1007949.71 00:10:15.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003004.89 1000270.19 1010330.98 00:10:15.205 ======================================================== 00:10:15.205 Total : 256.00 0.12 1002696.16 1000140.46 1010330.98 00:10:15.205 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3436873 00:10:15.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3436873) - No such process 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3436873 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:15.466 rmmod nvme_tcp 00:10:15.466 rmmod nvme_fabrics 00:10:15.466 rmmod nvme_keyring 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3435961 ']' 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3435961 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3435961 ']' 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3435961 00:10:15.466 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:15.467 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.467 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3435961 00:10:15.727 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.727 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:10:15.727 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3435961' 00:10:15.727 killing process with pid 3435961 00:10:15.727 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3435961 00:10:15.727 20:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3435961 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.667 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.668 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.668 20:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:18.581 00:10:18.581 real 0m18.785s 00:10:18.581 user 0m31.867s 00:10:18.581 sys 0m6.430s 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.581 ************************************ 00:10:18.581 END TEST nvmf_delete_subsystem 00:10:18.581 ************************************ 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.581 ************************************ 00:10:18.581 START TEST nvmf_host_management 00:10:18.581 ************************************ 00:10:18.581 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:18.844 * Looking for test storage... 
00:10:18.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:18.844 20:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.572 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.572 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.572 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.572 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.573 
20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:25.573 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:25.573 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:25.573 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:25.573 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.573 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:10:25.835 00:10:25.835 --- 10.0.0.2 ping statistics --- 00:10:25.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.835 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:10:25.835 00:10:25.835 --- 10.0.0.1 ping statistics --- 00:10:25.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.835 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3441885 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3441885 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3441885 ']' 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.835 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.836 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
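The commands traced in this stretch are nvmf_tcp_init: the first ice port (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, 10.0.0.2 and 10.0.0.1 are assigned on the shared /24, port 4420 is opened in the firewall, and the link is ping-verified in both directions before any NVMe-oF traffic is attempted. A minimal stand-alone sketch of the same setup follows; IFACE_TGT, IFACE_INI and NS are placeholders for whatever two ports a given machine exposes.

#!/usr/bin/env bash
# Sketch of the netns-based TCP test bed built by nvmf_tcp_init above (run as root).
set -e
IFACE_TGT=${IFACE_TGT:-cvl_0_0}   # becomes the target side, inside the namespace
IFACE_INI=${IFACE_INI:-cvl_0_1}   # stays in the root namespace as the initiator
NS=${NS:-cvl_0_0_ns_spdk}

ip -4 addr flush "$IFACE_TGT"
ip -4 addr flush "$IFACE_INI"
ip netns add "$NS"
ip link set "$IFACE_TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$IFACE_INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IFACE_TGT"
ip link set "$IFACE_INI" up
ip netns exec "$NS" ip link set "$IFACE_TGT" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic (port 4420) through the host firewall on the initiator port.
iptables -I INPUT 1 -i "$IFACE_INI" -p tcp --dport 4420 -j ACCEPT
# Verify the link in both directions, exactly as the trace does with ping -c 1.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1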
00:10:25.836 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.836 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.836 20:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:25.836 [2024-07-22 20:17:37.814953] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:25.836 [2024-07-22 20:17:37.815077] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.097 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.097 [2024-07-22 20:17:37.968345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.358 [2024-07-22 20:17:38.207618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.358 [2024-07-22 20:17:38.207686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.358 [2024-07-22 20:17:38.207701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.358 [2024-07-22 20:17:38.207714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.358 [2024-07-22 20:17:38.207726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.358 [2024-07-22 20:17:38.207902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.358 [2024-07-22 20:17:38.208059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.358 [2024-07-22 20:17:38.208165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.358 [2024-07-22 20:17:38.208195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 [2024-07-22 20:17:38.605499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.619 20:17:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.619 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.880 Malloc0 00:10:26.880 [2024-07-22 20:17:38.705957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3442197 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3442197 /var/tmp/bdevperf.sock 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3442197 ']' 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:26.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
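Between the target start and the bdevperf launch above, the target inside the namespace is configured over /var/tmp/spdk.sock: the TCP transport is created with -o -u 8192, and a Malloc0-backed subsystem ends up listening on 10.0.0.2:4420 for host0. The rpcs.txt assembled at host_management.sh@23 is not echoed in this log, so the subsystem commands below are a plausible reconstruction from standard rpc.py calls and the names that appear later in the trace, not a copy of that file; RPC, NQN and HOST are local placeholders and SPDK0 is an assumed serial number.

#!/usr/bin/env bash
# Hypothetical stand-alone version of the target bring-up traced above.
# Assumes the cvl_0_0_ns_spdk namespace from the previous sketch already exists.
NS=cvl_0_0_ns_spdk
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

# Start the target on cores 1-4 (-m 0x1E) inside the namespace, as in the trace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

# Wait for /var/tmp/spdk.sock to accept RPCs (the waitforlisten step).
until $RPC framework_wait_init 2>/dev/null; do sleep 0.2; done

$RPC nvmf_create_transport -t tcp -o -u 8192   # shown at host_management.sh@18
# The rest mirrors what rpcs.txt presumably contains: a 64 MiB / 512 B malloc bdev
# exported as cnode0 to host0 on 10.0.0.2:4420.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem "$NQN" -s SPDK0
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host "$NQN" "$HOST"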
00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:26.880 { 00:10:26.880 "params": { 00:10:26.880 "name": "Nvme$subsystem", 00:10:26.880 "trtype": "$TEST_TRANSPORT", 00:10:26.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.880 "adrfam": "ipv4", 00:10:26.880 "trsvcid": "$NVMF_PORT", 00:10:26.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.880 "hdgst": ${hdgst:-false}, 00:10:26.880 "ddgst": ${ddgst:-false} 00:10:26.880 }, 00:10:26.880 "method": "bdev_nvme_attach_controller" 00:10:26.880 } 00:10:26.880 EOF 00:10:26.880 )") 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:26.880 20:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:26.880 "params": { 00:10:26.880 "name": "Nvme0", 00:10:26.880 "trtype": "tcp", 00:10:26.880 "traddr": "10.0.0.2", 00:10:26.880 "adrfam": "ipv4", 00:10:26.880 "trsvcid": "4420", 00:10:26.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:26.880 "hdgst": false, 00:10:26.880 "ddgst": false 00:10:26.880 }, 00:10:26.880 "method": "bdev_nvme_attach_controller" 00:10:26.880 }' 00:10:26.880 [2024-07-22 20:17:38.841257] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:26.880 [2024-07-22 20:17:38.841365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442197 ] 00:10:26.880 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.141 [2024-07-22 20:17:38.951300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.141 [2024-07-22 20:17:39.129407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.712 Running I/O for 10 seconds... 
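The block above is gen_nvmf_target_json at work: for each subsystem it captures a heredoc describing one bdev_nvme_attach_controller call into a bash array, joins the fragments, normalizes them with jq, and hands the result to bdevperf over a /dev/fd process substitution. A condensed sketch of that pattern is below; gen_target_json_sketch is a hypothetical stand-in, and the surrounding "subsystems"/"bdev" wrapper is an assumption about the final file shape, since only the per-controller fragment is echoed in the trace.

#!/usr/bin/env bash
gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the fragments with commas and pretty-print the assumed wrapper with jq.
  local IFS=,
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# bdevperf then reads the generated config through a file-descriptor redirect,
# exactly as the traced command does with --json /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_target_json_sketch 0) -q 64 -o 65536 -w verify -t 10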
00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:27.712 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:27.973 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:27.973 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:27.973 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:27.973 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:27.973 20:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.973 20:17:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.236 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.236 [2024-07-22 20:17:40.042887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.236 [2024-07-22 20:17:40.042992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:28.237 [2024-07-22 20:17:40.043744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.043959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.043973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:28.237 [2024-07-22 20:17:40.043984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.044002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.044013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.044026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.044036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.044049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.044060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.044073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.237 [2024-07-22 20:17:40.044083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.237 [2024-07-22 20:17:40.044096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 
20:17:40.044231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.044980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.044991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.045003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.045014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.238 [2024-07-22 20:17:40.045026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.238 [2024-07-22 20:17:40.045037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:28.239 [2024-07-22 20:17:40.045330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set 00:10:28.239 [2024-07-22 20:17:40.045555] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389080 was disconnected and freed. reset controller. 
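The burst of ABORTED - SQ DELETION (00/08) completions above is the expected result of tearing down I/O qpair 1 during the host_management reset path: every READ still outstanding on that submission queue is completed with that status before the qpair is freed and the controller reset begins. When triaging a log like this, a per-status count is usually more useful than the raw flood; a minimal sketch, assuming the console output has been saved to a file named build.log (a name not taken from this run):

# how many completions were aborted by the SQ deletion
grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# how many distinct outstanding READ commands (by cid) were affected on qpair 1
grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l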
00:10:28.239 [2024-07-22 20:17:40.045635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.239 [2024-07-22 20:17:40.045652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.239 [2024-07-22 20:17:40.045676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.239 [2024-07-22 20:17:40.045700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.239 [2024-07-22 20:17:40.045721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.239 [2024-07-22 20:17:40.045732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:10:28.239 [2024-07-22 20:17:40.047014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:28.239 task offset: 65536 on job bdev=Nvme0n1 fails 00:10:28.239 00:10:28.239 Latency(us) 00:10:28.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.239 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:28.239 Job: Nvme0n1 ended in about 0.44 seconds with error 00:10:28.239 Verification LBA range: start 0x0 length 0x400 00:10:28.239 Nvme0n1 : 0.44 1169.48 73.09 146.18 0.00 47198.32 5352.11 40413.87 00:10:28.239 =================================================================================================================== 00:10:28.239 Total : 1169.48 73.09 146.18 0.00 47198.32 5352.11 40413.87 00:10:28.239 [2024-07-22 20:17:40.051336] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.239 [2024-07-22 20:17:40.051369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.239 20:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:28.239 [2024-07-22 20:17:40.065717] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller 
successful. 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3442197 00:10:29.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3442197) - No such process 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:29.181 { 00:10:29.181 "params": { 00:10:29.181 "name": "Nvme$subsystem", 00:10:29.181 "trtype": "$TEST_TRANSPORT", 00:10:29.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.181 "adrfam": "ipv4", 00:10:29.181 "trsvcid": "$NVMF_PORT", 00:10:29.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.181 "hdgst": ${hdgst:-false}, 00:10:29.181 "ddgst": ${ddgst:-false} 00:10:29.181 }, 00:10:29.181 "method": "bdev_nvme_attach_controller" 00:10:29.181 } 00:10:29.181 EOF 00:10:29.181 )") 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:29.181 20:17:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:29.181 "params": { 00:10:29.181 "name": "Nvme0", 00:10:29.181 "trtype": "tcp", 00:10:29.181 "traddr": "10.0.0.2", 00:10:29.181 "adrfam": "ipv4", 00:10:29.181 "trsvcid": "4420", 00:10:29.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:29.181 "hdgst": false, 00:10:29.181 "ddgst": false 00:10:29.181 }, 00:10:29.181 "method": "bdev_nvme_attach_controller" 00:10:29.181 }' 00:10:29.181 [2024-07-22 20:17:41.152296] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
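The JSON streamed to bdevperf over /dev/fd/62 above boils down to a single bdev_nvme_attach_controller entry pointing at the subsystem created earlier in this run. Outside the harness, the same workload can be reproduced by putting that entry into a regular config file; a minimal sketch, assuming the target is still listening on 10.0.0.2:4420 and that the entry is wrapped in the usual SPDK "subsystems"/"bdev" config layout (the wrapper emitted by gen_nvmf_target_json is not shown verbatim in this log):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF
# same parameters as the harness run: 64-deep queue, 64 KiB I/O, verify workload, 1 second
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1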
00:10:29.181 [2024-07-22 20:17:41.152410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442582 ] 00:10:29.442 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.442 [2024-07-22 20:17:41.266775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.442 [2024-07-22 20:17:41.446278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.014 Running I/O for 1 seconds... 00:10:30.957 00:10:30.957 Latency(us) 00:10:30.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.957 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:30.957 Verification LBA range: start 0x0 length 0x400 00:10:30.957 Nvme0n1 : 1.01 1403.83 87.74 0.00 0.00 44743.84 2007.04 38010.88 00:10:30.957 =================================================================================================================== 00:10:30.957 Total : 1403.83 87.74 0.00 0.00 44743.84 2007.04 38010.88 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.898 rmmod nvme_tcp 00:10:31.898 rmmod nvme_fabrics 00:10:31.898 rmmod nvme_keyring 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3441885 ']' 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3441885 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3441885 ']' 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3441885 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@953 -- # uname 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441885 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441885' 00:10:31.898 killing process with pid 3441885 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3441885 00:10:31.898 20:17:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3441885 00:10:32.469 [2024-07-22 20:17:44.345891] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.469 20:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.461 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:34.722 00:10:34.722 real 0m15.948s 00:10:34.722 user 0m30.212s 00:10:34.722 sys 0m6.543s 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:34.722 ************************************ 00:10:34.722 END TEST nvmf_host_management 00:10:34.722 ************************************ 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.722 ************************************ 00:10:34.722 START TEST nvmf_lvol 00:10:34.722 
************************************ 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:34.722 * Looking for test storage... 00:10:34.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.722 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:34.723 20:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:42.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:42.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:42.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:42.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:42.868 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.869 20:17:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:10:42.869 00:10:42.869 --- 10.0.0.2 ping statistics --- 00:10:42.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.869 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:10:42.869 00:10:42.869 --- 10.0.0.1 ping statistics --- 00:10:42.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.869 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3447554 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3447554 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3447554 ']' 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.869 20:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.869 [2024-07-22 20:17:53.973498] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
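The nvmftestinit block above wires the two physical ports back to back: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and the cross-namespace pings confirm the path before the target is started. Consolidated, the setup amounts to roughly the following (interface names are the ones discovered earlier in this log and will differ on other machines):

# target-side port lives in its own netns; initiator-side port stays in the root netns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic in on the initiator-side port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1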
00:10:42.869 [2024-07-22 20:17:53.973626] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.869 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.869 [2024-07-22 20:17:54.108034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.869 [2024-07-22 20:17:54.289653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.869 [2024-07-22 20:17:54.289695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.869 [2024-07-22 20:17:54.289707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.869 [2024-07-22 20:17:54.289716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.869 [2024-07-22 20:17:54.289726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.869 [2024-07-22 20:17:54.289900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.869 [2024-07-22 20:17:54.289977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.869 [2024-07-22 20:17:54.289984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.869 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.869 [2024-07-22 20:17:54.884572] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.131 20:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.131 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:43.393 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.393 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:43.393 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:43.654 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:43.916 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=86ed00c3-ee3c-4ae4-8947-cc4fbbf6b120 
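The lvol test's backing store is assembled entirely from the RPCs traced above: two 64 MiB malloc bdevs are combined into a RAID-0 with a 64 KiB strip size, and an lvstore named lvs is created on top of it, producing the UUID (86ed00c3-...) reused in the next step. Replayed by hand against a running target, the sequence is roughly (rpc.py path shortened, default RPC socket assumed):

rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512                          # first backing bdev  -> Malloc0
$rpc bdev_malloc_create 64 512                          # second backing bdev -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs                 # prints the new lvstore UUID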
00:10:43.916 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86ed00c3-ee3c-4ae4-8947-cc4fbbf6b120 lvol 20 00:10:43.916 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=82d4052a-dab5-4d97-bfc4-555a6bda3a41 00:10:43.916 20:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:44.176 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82d4052a-dab5-4d97-bfc4-555a6bda3a41 00:10:44.438 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:44.438 [2024-07-22 20:17:56.349252] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.438 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.700 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3447940 00:10:44.700 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:44.700 20:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:44.700 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.643 20:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 82d4052a-dab5-4d97-bfc4-555a6bda3a41 MY_SNAPSHOT 00:10:45.903 20:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9995a167-e6dd-4e4f-a91e-5dc34b32fe78 00:10:45.903 20:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 82d4052a-dab5-4d97-bfc4-555a6bda3a41 30 00:10:46.164 20:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9995a167-e6dd-4e4f-a91e-5dc34b32fe78 MY_CLONE 00:10:46.164 20:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0d23b8a4-d3b3-4f66-acf9-939a22e3fc4f 00:10:46.164 20:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0d23b8a4-d3b3-4f66-acf9-939a22e3fc4f 00:10:46.736 20:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3447940 00:10:56.744 Initializing NVMe Controllers 00:10:56.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:56.744 Controller IO queue size 128, less than required. 00:10:56.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:56.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:56.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:56.744 Initialization complete. Launching workers. 00:10:56.744 ======================================================== 00:10:56.744 Latency(us) 00:10:56.744 Device Information : IOPS MiB/s Average min max 00:10:56.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16346.10 63.85 7832.74 445.97 83225.13 00:10:56.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11490.80 44.89 11140.21 4854.77 96540.14 00:10:56.744 ======================================================== 00:10:56.744 Total : 27836.90 108.74 9198.03 445.97 96540.14 00:10:56.744 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82d4052a-dab5-4d97-bfc4-555a6bda3a41 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86ed00c3-ee3c-4ae4-8947-cc4fbbf6b120 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.744 rmmod nvme_tcp 00:10:56.744 rmmod nvme_fabrics 00:10:56.744 rmmod nvme_keyring 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3447554 ']' 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3447554 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3447554 ']' 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3447554 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3447554 00:10:56.744 20:18:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3447554' 00:10:56.744 killing process with pid 3447554 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3447554 00:10:56.744 20:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3447554 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.744 20:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.295 00:10:59.295 real 0m24.266s 00:10:59.295 user 1m6.019s 00:10:59.295 sys 0m7.805s 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:59.295 ************************************ 00:10:59.295 END TEST nvmf_lvol 00:10:59.295 ************************************ 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.295 ************************************ 00:10:59.295 START TEST nvmf_lvs_grow 00:10:59.295 ************************************ 00:10:59.295 20:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:59.295 * Looking for test storage... 
00:10:59.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.295 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.296 20:18:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:59.296 20:18:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.296 20:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.893 
20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.893 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.893 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.894 20:18:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.894 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.155 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.155 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.155 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.155 20:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:11:06.155 00:11:06.155 --- 10.0.0.2 ping statistics --- 00:11:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.155 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:11:06.155 00:11:06.155 --- 10.0.0.1 ping statistics --- 00:11:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.155 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3455180 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3455180 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3455180 ']' 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.155 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:06.417 [2024-07-22 20:18:18.234638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:06.417 [2024-07-22 20:18:18.234763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.417 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.417 [2024-07-22 20:18:18.368024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.677 [2024-07-22 20:18:18.548496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.677 [2024-07-22 20:18:18.548538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.677 [2024-07-22 20:18:18.548553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.677 [2024-07-22 20:18:18.548562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.677 [2024-07-22 20:18:18.548573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.677 [2024-07-22 20:18:18.548599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.939 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.939 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:07.247 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.247 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:07.247 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.247 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.247 20:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:07.247 [2024-07-22 20:18:19.137855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.247 ************************************ 00:11:07.247 START TEST lvs_grow_clean 00:11:07.247 ************************************ 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.247 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:07.506 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:07.506 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:07.768 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8046c40f-1cbb-4ee9-9ef2-f250888831df lvol 150 00:11:08.029 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 00:11:08.029 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:08.029 20:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:08.029 [2024-07-22 20:18:20.020583] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:08.029 [2024-07-22 20:18:20.020660] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:08.029 true 00:11:08.029 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:08.029 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:08.289 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:08.289 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.550 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 00:11:08.550 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:08.810 [2024-07-22 20:18:20.610499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3455712 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3455712 /var/tmp/bdevperf.sock 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3455712 ']' 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:08.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.810 20:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:09.070 [2024-07-22 20:18:20.856568] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:09.070 [2024-07-22 20:18:20.856678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455712 ] 00:11:09.070 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.070 [2024-07-22 20:18:20.980507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.331 [2024-07-22 20:18:21.155887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.591 20:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.591 20:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:09.591 20:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:10.161 Nvme0n1 00:11:10.161 20:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:10.161 [ 00:11:10.161 { 00:11:10.161 "name": "Nvme0n1", 00:11:10.161 "aliases": [ 00:11:10.161 "3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1" 00:11:10.161 ], 00:11:10.161 "product_name": "NVMe disk", 00:11:10.161 "block_size": 4096, 00:11:10.161 "num_blocks": 38912, 00:11:10.161 "uuid": "3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1", 00:11:10.161 "assigned_rate_limits": { 00:11:10.161 "rw_ios_per_sec": 0, 00:11:10.161 "rw_mbytes_per_sec": 0, 00:11:10.161 "r_mbytes_per_sec": 0, 00:11:10.161 "w_mbytes_per_sec": 0 00:11:10.161 }, 00:11:10.161 "claimed": false, 00:11:10.161 "zoned": false, 00:11:10.161 "supported_io_types": { 00:11:10.161 "read": true, 00:11:10.161 "write": true, 00:11:10.161 "unmap": true, 00:11:10.161 "flush": true, 00:11:10.161 "reset": true, 00:11:10.161 "nvme_admin": true, 00:11:10.161 "nvme_io": true, 00:11:10.161 "nvme_io_md": false, 00:11:10.161 "write_zeroes": true, 00:11:10.161 "zcopy": false, 00:11:10.161 "get_zone_info": false, 00:11:10.161 "zone_management": false, 00:11:10.161 "zone_append": false, 00:11:10.161 "compare": true, 00:11:10.161 "compare_and_write": true, 00:11:10.161 "abort": true, 00:11:10.161 "seek_hole": false, 00:11:10.161 "seek_data": false, 00:11:10.161 "copy": true, 00:11:10.161 "nvme_iov_md": false 00:11:10.161 }, 00:11:10.161 "memory_domains": [ 00:11:10.161 { 00:11:10.161 "dma_device_id": "system", 00:11:10.161 "dma_device_type": 1 00:11:10.161 } 00:11:10.162 ], 00:11:10.162 "driver_specific": { 00:11:10.162 "nvme": [ 00:11:10.162 { 00:11:10.162 "trid": { 00:11:10.162 "trtype": "TCP", 00:11:10.162 "adrfam": "IPv4", 00:11:10.162 "traddr": "10.0.0.2", 00:11:10.162 "trsvcid": "4420", 00:11:10.162 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:10.162 }, 00:11:10.162 "ctrlr_data": { 00:11:10.162 "cntlid": 1, 00:11:10.162 "vendor_id": "0x8086", 00:11:10.162 "model_number": "SPDK bdev Controller", 00:11:10.162 "serial_number": "SPDK0", 00:11:10.162 "firmware_revision": "24.09", 00:11:10.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:10.162 "oacs": { 00:11:10.162 "security": 0, 00:11:10.162 "format": 0, 00:11:10.162 "firmware": 0, 00:11:10.162 "ns_manage": 0 00:11:10.162 }, 00:11:10.162 
"multi_ctrlr": true, 00:11:10.162 "ana_reporting": false 00:11:10.162 }, 00:11:10.162 "vs": { 00:11:10.162 "nvme_version": "1.3" 00:11:10.162 }, 00:11:10.162 "ns_data": { 00:11:10.162 "id": 1, 00:11:10.162 "can_share": true 00:11:10.162 } 00:11:10.162 } 00:11:10.162 ], 00:11:10.162 "mp_policy": "active_passive" 00:11:10.162 } 00:11:10.162 } 00:11:10.162 ] 00:11:10.162 20:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3455912 00:11:10.162 20:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:10.162 20:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.422 Running I/O for 10 seconds... 00:11:11.380 Latency(us) 00:11:11.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.380 Nvme0n1 : 1.00 16252.00 63.48 0.00 0.00 0.00 0.00 0.00 00:11:11.380 =================================================================================================================== 00:11:11.380 Total : 16252.00 63.48 0.00 0.00 0.00 0.00 0.00 00:11:11.380 00:11:12.429 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:12.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.429 Nvme0n1 : 2.00 16347.00 63.86 0.00 0.00 0.00 0.00 0.00 00:11:12.429 =================================================================================================================== 00:11:12.429 Total : 16347.00 63.86 0.00 0.00 0.00 0.00 0.00 00:11:12.429 00:11:12.429 true 00:11:12.429 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:12.429 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:12.688 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:12.688 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:12.688 20:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3455912 00:11:13.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.256 Nvme0n1 : 3.00 16358.33 63.90 0.00 0.00 0.00 0.00 0.00 00:11:13.256 =================================================================================================================== 00:11:13.256 Total : 16358.33 63.90 0.00 0.00 0.00 0.00 0.00 00:11:13.256 00:11:14.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.639 Nvme0n1 : 4.00 16395.75 64.05 0.00 0.00 0.00 0.00 0.00 00:11:14.639 =================================================================================================================== 00:11:14.639 Total : 16395.75 64.05 0.00 0.00 0.00 0.00 0.00 00:11:14.639 00:11:15.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:11:15.579 Nvme0n1 : 5.00 16419.20 64.14 0.00 0.00 0.00 0.00 0.00 00:11:15.579 =================================================================================================================== 00:11:15.579 Total : 16419.20 64.14 0.00 0.00 0.00 0.00 0.00 00:11:15.579 00:11:16.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.519 Nvme0n1 : 6.00 16434.00 64.20 0.00 0.00 0.00 0.00 0.00 00:11:16.519 =================================================================================================================== 00:11:16.519 Total : 16434.00 64.20 0.00 0.00 0.00 0.00 0.00 00:11:16.519 00:11:17.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.460 Nvme0n1 : 7.00 16444.43 64.24 0.00 0.00 0.00 0.00 0.00 00:11:17.460 =================================================================================================================== 00:11:17.460 Total : 16444.43 64.24 0.00 0.00 0.00 0.00 0.00 00:11:17.460 00:11:18.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.400 Nvme0n1 : 8.00 16460.25 64.30 0.00 0.00 0.00 0.00 0.00 00:11:18.400 =================================================================================================================== 00:11:18.400 Total : 16460.25 64.30 0.00 0.00 0.00 0.00 0.00 00:11:18.400 00:11:19.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.342 Nvme0n1 : 9.00 16465.56 64.32 0.00 0.00 0.00 0.00 0.00 00:11:19.342 =================================================================================================================== 00:11:19.342 Total : 16465.56 64.32 0.00 0.00 0.00 0.00 0.00 00:11:19.342 00:11:20.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.284 Nvme0n1 : 10.00 16476.20 64.36 0.00 0.00 0.00 0.00 0.00 00:11:20.284 =================================================================================================================== 00:11:20.284 Total : 16476.20 64.36 0.00 0.00 0.00 0.00 0.00 00:11:20.284 00:11:20.284 00:11:20.284 Latency(us) 00:11:20.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.284 Nvme0n1 : 10.01 16478.32 64.37 0.00 0.00 7763.07 2498.56 13926.40 00:11:20.285 =================================================================================================================== 00:11:20.285 Total : 16478.32 64.37 0.00 0.00 7763.07 2498.56 13926.40 00:11:20.285 0 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3455712 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3455712 ']' 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3455712 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.285 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3455712 00:11:20.546 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:20.546 
20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:20.546 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3455712' 00:11:20.546 killing process with pid 3455712 00:11:20.546 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3455712 00:11:20.546 Received shutdown signal, test time was about 10.000000 seconds 00:11:20.546 00:11:20.546 Latency(us) 00:11:20.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.546 =================================================================================================================== 00:11:20.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:20.546 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3455712 00:11:21.118 20:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.119 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:21.378 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:21.378 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:21.378 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:21.378 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:21.378 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:21.638 [2024-07-22 20:18:33.495985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:21.638 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:21.899 request: 00:11:21.899 { 00:11:21.899 "uuid": "8046c40f-1cbb-4ee9-9ef2-f250888831df", 00:11:21.899 "method": "bdev_lvol_get_lvstores", 00:11:21.899 "req_id": 1 00:11:21.899 } 00:11:21.899 Got JSON-RPC error response 00:11:21.899 response: 00:11:21.899 { 00:11:21.899 "code": -19, 00:11:21.899 "message": "No such device" 00:11:21.899 } 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:21.899 aio_bdev 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:21.899 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:22.159 20:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 -t 2000 00:11:22.159 [ 00:11:22.159 { 00:11:22.159 "name": "3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1", 00:11:22.159 "aliases": [ 00:11:22.159 "lvs/lvol" 00:11:22.159 ], 00:11:22.159 "product_name": "Logical Volume", 00:11:22.159 "block_size": 4096, 00:11:22.159 "num_blocks": 38912, 00:11:22.159 "uuid": "3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1", 00:11:22.159 "assigned_rate_limits": { 00:11:22.159 "rw_ios_per_sec": 0, 00:11:22.159 "rw_mbytes_per_sec": 0, 00:11:22.159 "r_mbytes_per_sec": 0, 00:11:22.159 "w_mbytes_per_sec": 0 00:11:22.159 }, 00:11:22.159 "claimed": false, 00:11:22.159 "zoned": false, 00:11:22.159 "supported_io_types": { 00:11:22.159 "read": true, 00:11:22.159 "write": true, 00:11:22.159 "unmap": true, 00:11:22.159 "flush": false, 00:11:22.159 "reset": true, 00:11:22.159 "nvme_admin": false, 00:11:22.159 "nvme_io": false, 00:11:22.159 "nvme_io_md": false, 00:11:22.159 "write_zeroes": true, 00:11:22.159 "zcopy": false, 00:11:22.159 "get_zone_info": false, 00:11:22.159 "zone_management": false, 00:11:22.159 "zone_append": false, 00:11:22.159 "compare": false, 00:11:22.159 "compare_and_write": false, 00:11:22.159 "abort": false, 00:11:22.159 "seek_hole": true, 00:11:22.159 "seek_data": true, 00:11:22.159 "copy": false, 00:11:22.159 "nvme_iov_md": false 00:11:22.159 }, 00:11:22.159 "driver_specific": { 00:11:22.159 "lvol": { 00:11:22.159 "lvol_store_uuid": "8046c40f-1cbb-4ee9-9ef2-f250888831df", 00:11:22.159 "base_bdev": "aio_bdev", 00:11:22.159 "thin_provision": false, 00:11:22.159 "num_allocated_clusters": 38, 00:11:22.159 "snapshot": false, 00:11:22.159 "clone": false, 00:11:22.159 "esnap_clone": false 00:11:22.159 } 00:11:22.159 } 00:11:22.159 } 00:11:22.159 ] 00:11:22.159 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:22.159 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:22.159 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:22.420 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:22.420 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:22.420 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:22.681 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:22.681 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ec9acd7-086e-4f9b-967f-e0c1a62f2cb1 00:11:22.681 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8046c40f-1cbb-4ee9-9ef2-f250888831df 00:11:22.941 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:22.941 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:23.202 00:11:23.202 real 0m15.797s 00:11:23.202 user 0m15.500s 00:11:23.202 sys 0m1.304s 00:11:23.202 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.202 20:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:23.202 ************************************ 00:11:23.202 END TEST lvs_grow_clean 00:11:23.202 ************************************ 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:23.202 ************************************ 00:11:23.202 START TEST lvs_grow_dirty 00:11:23.202 ************************************ 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:23.202 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:23.462 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:23.462 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
00:11:23.462 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:23.462 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:23.462 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:23.723 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:23.723 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:23.723 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de lvol 150 00:11:23.723 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:23.723 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:23.982 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:23.982 [2024-07-22 20:18:35.870955] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:23.982 [2024-07-22 20:18:35.871024] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:23.982 true 00:11:23.983 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:23.983 20:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:24.243 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:24.243 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:24.243 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:24.504 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:24.504 [2024-07-22 20:18:36.480923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.504 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3458979 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3458979 /var/tmp/bdevperf.sock 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3458979 ']' 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.765 20:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:24.765 [2024-07-22 20:18:36.729911] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:24.765 [2024-07-22 20:18:36.730011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458979 ] 00:11:25.092 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.092 [2024-07-22 20:18:36.849921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.092 [2024-07-22 20:18:36.983944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.662 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.662 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:25.662 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:25.923 Nvme0n1 00:11:25.923 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:26.183 [ 00:11:26.183 { 00:11:26.183 "name": "Nvme0n1", 00:11:26.183 "aliases": [ 00:11:26.183 "8bf5bbd2-1469-4e33-b25d-4362a176f235" 00:11:26.183 ], 00:11:26.183 "product_name": "NVMe disk", 00:11:26.183 "block_size": 4096, 00:11:26.184 "num_blocks": 38912, 00:11:26.184 "uuid": "8bf5bbd2-1469-4e33-b25d-4362a176f235", 00:11:26.184 "assigned_rate_limits": { 00:11:26.184 "rw_ios_per_sec": 0, 00:11:26.184 "rw_mbytes_per_sec": 0, 00:11:26.184 "r_mbytes_per_sec": 0, 00:11:26.184 "w_mbytes_per_sec": 0 00:11:26.184 }, 00:11:26.184 "claimed": false, 00:11:26.184 "zoned": false, 00:11:26.184 "supported_io_types": { 00:11:26.184 "read": true, 00:11:26.184 "write": true, 00:11:26.184 "unmap": true, 00:11:26.184 "flush": true, 00:11:26.184 "reset": true, 00:11:26.184 "nvme_admin": true, 00:11:26.184 "nvme_io": true, 00:11:26.184 "nvme_io_md": false, 00:11:26.184 "write_zeroes": true, 00:11:26.184 "zcopy": false, 00:11:26.184 "get_zone_info": false, 00:11:26.184 "zone_management": false, 00:11:26.184 "zone_append": false, 00:11:26.184 "compare": true, 00:11:26.184 "compare_and_write": true, 00:11:26.184 "abort": true, 00:11:26.184 "seek_hole": false, 00:11:26.184 "seek_data": false, 00:11:26.184 "copy": true, 00:11:26.184 "nvme_iov_md": false 00:11:26.184 }, 00:11:26.184 "memory_domains": [ 00:11:26.184 { 00:11:26.184 "dma_device_id": "system", 00:11:26.184 "dma_device_type": 1 00:11:26.184 } 00:11:26.184 ], 00:11:26.184 "driver_specific": { 00:11:26.184 "nvme": [ 00:11:26.184 { 00:11:26.184 "trid": { 00:11:26.184 "trtype": "TCP", 00:11:26.184 "adrfam": "IPv4", 00:11:26.184 "traddr": "10.0.0.2", 00:11:26.184 "trsvcid": "4420", 00:11:26.184 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:26.184 }, 00:11:26.184 "ctrlr_data": { 00:11:26.184 "cntlid": 1, 00:11:26.184 "vendor_id": "0x8086", 00:11:26.184 "model_number": "SPDK bdev Controller", 00:11:26.184 "serial_number": "SPDK0", 00:11:26.184 "firmware_revision": "24.09", 00:11:26.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:26.184 "oacs": { 00:11:26.184 "security": 0, 00:11:26.184 "format": 0, 00:11:26.184 "firmware": 0, 00:11:26.184 "ns_manage": 0 00:11:26.184 }, 00:11:26.184 
"multi_ctrlr": true, 00:11:26.184 "ana_reporting": false 00:11:26.184 }, 00:11:26.184 "vs": { 00:11:26.184 "nvme_version": "1.3" 00:11:26.184 }, 00:11:26.184 "ns_data": { 00:11:26.184 "id": 1, 00:11:26.184 "can_share": true 00:11:26.184 } 00:11:26.184 } 00:11:26.184 ], 00:11:26.184 "mp_policy": "active_passive" 00:11:26.184 } 00:11:26.184 } 00:11:26.184 ] 00:11:26.184 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3459222 00:11:26.184 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:26.184 20:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:26.184 Running I/O for 10 seconds... 00:11:27.126 Latency(us) 00:11:27.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.126 Nvme0n1 : 1.00 16199.00 63.28 0.00 0.00 0.00 0.00 0.00 00:11:27.126 =================================================================================================================== 00:11:27.126 Total : 16199.00 63.28 0.00 0.00 0.00 0.00 0.00 00:11:27.126 00:11:28.068 20:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:28.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.068 Nvme0n1 : 2.00 16323.50 63.76 0.00 0.00 0.00 0.00 0.00 00:11:28.068 =================================================================================================================== 00:11:28.068 Total : 16323.50 63.76 0.00 0.00 0.00 0.00 0.00 00:11:28.068 00:11:28.329 true 00:11:28.329 20:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:28.329 20:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:28.329 20:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:28.329 20:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:28.329 20:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3459222 00:11:29.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.271 Nvme0n1 : 3.00 16363.33 63.92 0.00 0.00 0.00 0.00 0.00 00:11:29.271 =================================================================================================================== 00:11:29.271 Total : 16363.33 63.92 0.00 0.00 0.00 0.00 0.00 00:11:29.271 00:11:30.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.212 Nvme0n1 : 4.00 16400.25 64.06 0.00 0.00 0.00 0.00 0.00 00:11:30.213 =================================================================================================================== 00:11:30.213 Total : 16400.25 64.06 0.00 0.00 0.00 0.00 0.00 00:11:30.213 00:11:31.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:11:31.242 Nvme0n1 : 5.00 16421.80 64.15 0.00 0.00 0.00 0.00 0.00 00:11:31.242 =================================================================================================================== 00:11:31.242 Total : 16421.80 64.15 0.00 0.00 0.00 0.00 0.00 00:11:31.242 00:11:32.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.184 Nvme0n1 : 6.00 16447.17 64.25 0.00 0.00 0.00 0.00 0.00 00:11:32.184 =================================================================================================================== 00:11:32.184 Total : 16447.17 64.25 0.00 0.00 0.00 0.00 0.00 00:11:32.184 00:11:33.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.139 Nvme0n1 : 7.00 16465.14 64.32 0.00 0.00 0.00 0.00 0.00 00:11:33.139 =================================================================================================================== 00:11:33.139 Total : 16465.14 64.32 0.00 0.00 0.00 0.00 0.00 00:11:33.139 00:11:34.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.081 Nvme0n1 : 8.00 16479.00 64.37 0.00 0.00 0.00 0.00 0.00 00:11:34.081 =================================================================================================================== 00:11:34.081 Total : 16479.00 64.37 0.00 0.00 0.00 0.00 0.00 00:11:34.081 00:11:35.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.466 Nvme0n1 : 9.00 16489.78 64.41 0.00 0.00 0.00 0.00 0.00 00:11:35.466 =================================================================================================================== 00:11:35.466 Total : 16489.78 64.41 0.00 0.00 0.00 0.00 0.00 00:11:35.466 00:11:36.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.409 Nvme0n1 : 10.00 16504.70 64.47 0.00 0.00 0.00 0.00 0.00 00:11:36.409 =================================================================================================================== 00:11:36.409 Total : 16504.70 64.47 0.00 0.00 0.00 0.00 0.00 00:11:36.409 00:11:36.409 00:11:36.409 Latency(us) 00:11:36.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.409 Nvme0n1 : 10.01 16504.89 64.47 0.00 0.00 7751.26 4778.67 15400.96 00:11:36.409 =================================================================================================================== 00:11:36.409 Total : 16504.89 64.47 0.00 0.00 7751.26 4778.67 15400.96 00:11:36.409 0 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3458979 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3458979 ']' 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3458979 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3458979 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:36.409 
20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3458979' 00:11:36.409 killing process with pid 3458979 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3458979 00:11:36.409 Received shutdown signal, test time was about 10.000000 seconds 00:11:36.409 00:11:36.409 Latency(us) 00:11:36.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.409 =================================================================================================================== 00:11:36.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:36.409 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3458979 00:11:36.670 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.930 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:37.191 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:37.191 20:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3455180 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3455180 00:11:37.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3455180 Killed "${NVMF_APP[@]}" "$@" 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3461352 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3461352 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3461352 ']' 00:11:37.191 20:18:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.191 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:37.452 [2024-07-22 20:18:49.250031] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:37.452 [2024-07-22 20:18:49.250139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.452 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.452 [2024-07-22 20:18:49.385538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.713 [2024-07-22 20:18:49.568921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.713 [2024-07-22 20:18:49.568965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.713 [2024-07-22 20:18:49.568978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.713 [2024-07-22 20:18:49.568987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.713 [2024-07-22 20:18:49.569001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.713 [2024-07-22 20:18:49.569035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.973 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.973 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:37.973 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.973 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.973 20:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:38.234 [2024-07-22 20:18:50.164429] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:38.234 [2024-07-22 20:18:50.164575] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:38.234 [2024-07-22 20:18:50.164618] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.234 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:38.235 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.235 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.235 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:38.495 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf5bbd2-1469-4e33-b25d-4362a176f235 -t 2000 00:11:38.495 [ 00:11:38.495 { 00:11:38.495 "name": "8bf5bbd2-1469-4e33-b25d-4362a176f235", 00:11:38.495 "aliases": [ 00:11:38.495 "lvs/lvol" 00:11:38.495 ], 00:11:38.495 "product_name": "Logical Volume", 00:11:38.495 "block_size": 4096, 00:11:38.495 "num_blocks": 38912, 00:11:38.495 "uuid": "8bf5bbd2-1469-4e33-b25d-4362a176f235", 00:11:38.495 "assigned_rate_limits": { 00:11:38.495 "rw_ios_per_sec": 0, 00:11:38.495 "rw_mbytes_per_sec": 0, 00:11:38.495 "r_mbytes_per_sec": 0, 00:11:38.495 "w_mbytes_per_sec": 0 00:11:38.495 }, 00:11:38.495 "claimed": false, 00:11:38.495 "zoned": false, 
00:11:38.495 "supported_io_types": { 00:11:38.495 "read": true, 00:11:38.495 "write": true, 00:11:38.495 "unmap": true, 00:11:38.495 "flush": false, 00:11:38.495 "reset": true, 00:11:38.495 "nvme_admin": false, 00:11:38.495 "nvme_io": false, 00:11:38.495 "nvme_io_md": false, 00:11:38.495 "write_zeroes": true, 00:11:38.495 "zcopy": false, 00:11:38.495 "get_zone_info": false, 00:11:38.495 "zone_management": false, 00:11:38.495 "zone_append": false, 00:11:38.495 "compare": false, 00:11:38.495 "compare_and_write": false, 00:11:38.495 "abort": false, 00:11:38.495 "seek_hole": true, 00:11:38.495 "seek_data": true, 00:11:38.495 "copy": false, 00:11:38.495 "nvme_iov_md": false 00:11:38.495 }, 00:11:38.495 "driver_specific": { 00:11:38.495 "lvol": { 00:11:38.495 "lvol_store_uuid": "3b01baac-fc60-44a4-9972-cf1bdb3fe1de", 00:11:38.495 "base_bdev": "aio_bdev", 00:11:38.495 "thin_provision": false, 00:11:38.495 "num_allocated_clusters": 38, 00:11:38.495 "snapshot": false, 00:11:38.495 "clone": false, 00:11:38.495 "esnap_clone": false 00:11:38.495 } 00:11:38.495 } 00:11:38.495 } 00:11:38.495 ] 00:11:38.495 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:38.495 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:38.495 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:38.756 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:38.756 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:38.756 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:39.016 [2024-07-22 20:18:50.964229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.016 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:39.017 20:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:39.277 request: 00:11:39.277 { 00:11:39.277 "uuid": "3b01baac-fc60-44a4-9972-cf1bdb3fe1de", 00:11:39.277 "method": "bdev_lvol_get_lvstores", 00:11:39.277 "req_id": 1 00:11:39.277 } 00:11:39.277 Got JSON-RPC error response 00:11:39.277 response: 00:11:39.277 { 00:11:39.277 "code": -19, 00:11:39.277 "message": "No such device" 00:11:39.277 } 00:11:39.277 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:39.277 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:39.277 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:39.277 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:39.277 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:39.538 aio_bdev 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:39.538 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:39.538 20:18:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf5bbd2-1469-4e33-b25d-4362a176f235 -t 2000 00:11:39.799 [ 00:11:39.799 { 00:11:39.799 "name": "8bf5bbd2-1469-4e33-b25d-4362a176f235", 00:11:39.799 "aliases": [ 00:11:39.799 "lvs/lvol" 00:11:39.799 ], 00:11:39.799 "product_name": "Logical Volume", 00:11:39.799 "block_size": 4096, 00:11:39.799 "num_blocks": 38912, 00:11:39.799 "uuid": "8bf5bbd2-1469-4e33-b25d-4362a176f235", 00:11:39.799 "assigned_rate_limits": { 00:11:39.799 "rw_ios_per_sec": 0, 00:11:39.799 "rw_mbytes_per_sec": 0, 00:11:39.799 "r_mbytes_per_sec": 0, 00:11:39.799 "w_mbytes_per_sec": 0 00:11:39.799 }, 00:11:39.799 "claimed": false, 00:11:39.799 "zoned": false, 00:11:39.799 "supported_io_types": { 00:11:39.799 "read": true, 00:11:39.799 "write": true, 00:11:39.799 "unmap": true, 00:11:39.799 "flush": false, 00:11:39.799 "reset": true, 00:11:39.799 "nvme_admin": false, 00:11:39.799 "nvme_io": false, 00:11:39.799 "nvme_io_md": false, 00:11:39.799 "write_zeroes": true, 00:11:39.799 "zcopy": false, 00:11:39.799 "get_zone_info": false, 00:11:39.799 "zone_management": false, 00:11:39.799 "zone_append": false, 00:11:39.799 "compare": false, 00:11:39.799 "compare_and_write": false, 00:11:39.799 "abort": false, 00:11:39.799 "seek_hole": true, 00:11:39.799 "seek_data": true, 00:11:39.799 "copy": false, 00:11:39.799 "nvme_iov_md": false 00:11:39.799 }, 00:11:39.799 "driver_specific": { 00:11:39.799 "lvol": { 00:11:39.799 "lvol_store_uuid": "3b01baac-fc60-44a4-9972-cf1bdb3fe1de", 00:11:39.799 "base_bdev": "aio_bdev", 00:11:39.799 "thin_provision": false, 00:11:39.800 "num_allocated_clusters": 38, 00:11:39.800 "snapshot": false, 00:11:39.800 "clone": false, 00:11:39.800 "esnap_clone": false 00:11:39.800 } 00:11:39.800 } 00:11:39.800 } 00:11:39.800 ] 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 00:11:39.800 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:40.060 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:40.060 20:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8bf5bbd2-1469-4e33-b25d-4362a176f235 00:11:40.060 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b01baac-fc60-44a4-9972-cf1bdb3fe1de 
00:11:40.324 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:40.586 00:11:40.586 real 0m17.347s 00:11:40.586 user 0m45.568s 00:11:40.586 sys 0m2.994s 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:40.586 ************************************ 00:11:40.586 END TEST lvs_grow_dirty 00:11:40.586 ************************************ 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:40.586 nvmf_trace.0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:40.586 rmmod nvme_tcp 00:11:40.586 rmmod nvme_fabrics 00:11:40.586 rmmod nvme_keyring 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3461352 ']' 00:11:40.586 
20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3461352 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3461352 ']' 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3461352 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.586 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3461352 00:11:40.847 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.847 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.847 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3461352' 00:11:40.847 killing process with pid 3461352 00:11:40.847 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3461352 00:11:40.847 20:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3461352 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.789 20:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:43.703 00:11:43.703 real 0m44.652s 00:11:43.703 user 1m7.583s 00:11:43.703 sys 0m10.110s 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.703 ************************************ 00:11:43.703 END TEST nvmf_lvs_grow 00:11:43.703 ************************************ 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.703 ************************************ 00:11:43.703 START TEST nvmf_bdev_io_wait 
00:11:43.703 ************************************ 00:11:43.703 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:43.968 * Looking for test storage... 00:11:43.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.968 
20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:43.968 20:18:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:11:52.166 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:11:52.167 20:19:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.167 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.167 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.167 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.167 20:19:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.167 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:11:52.168 00:11:52.168 --- 10.0.0.2 ping statistics --- 00:11:52.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.168 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:11:52.168 00:11:52.168 --- 10.0.0.1 ping statistics --- 00:11:52.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.168 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.168 20:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3466410 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3466410 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3466410 ']' 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.168 [2024-07-22 20:19:03.133987] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:52.168 [2024-07-22 20:19:03.134110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.168 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.168 [2024-07-22 20:19:03.272276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.168 [2024-07-22 20:19:03.462115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.168 [2024-07-22 20:19:03.462158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.168 [2024-07-22 20:19:03.462171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.168 [2024-07-22 20:19:03.462181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.168 [2024-07-22 20:19:03.462191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.168 [2024-07-22 20:19:03.462542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.168 [2024-07-22 20:19:03.462787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.168 [2024-07-22 20:19:03.462854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.168 [2024-07-22 20:19:03.462877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.168 20:19:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.168 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.168 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.168 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.168 20:19:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.169 [2024-07-22 20:19:04.085199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.169 Malloc0 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.169 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.430 [2024-07-22 20:19:04.197875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3466762 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3466764 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:52.430 { 00:11:52.430 "params": { 00:11:52.430 "name": "Nvme$subsystem", 00:11:52.430 "trtype": "$TEST_TRANSPORT", 00:11:52.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.430 "adrfam": "ipv4", 00:11:52.430 "trsvcid": "$NVMF_PORT", 00:11:52.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.430 "hdgst": ${hdgst:-false}, 00:11:52.430 "ddgst": ${ddgst:-false} 00:11:52.430 }, 00:11:52.430 "method": "bdev_nvme_attach_controller" 00:11:52.430 } 00:11:52.430 EOF 00:11:52.430 )") 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3466766 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:52.430 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3466769 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:52.431 { 00:11:52.431 "params": { 00:11:52.431 "name": "Nvme$subsystem", 00:11:52.431 "trtype": "$TEST_TRANSPORT", 00:11:52.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.431 "adrfam": "ipv4", 00:11:52.431 "trsvcid": "$NVMF_PORT", 00:11:52.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.431 "hdgst": ${hdgst:-false}, 00:11:52.431 "ddgst": ${ddgst:-false} 00:11:52.431 }, 00:11:52.431 "method": "bdev_nvme_attach_controller" 00:11:52.431 } 00:11:52.431 EOF 00:11:52.431 )") 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:52.431 { 00:11:52.431 "params": { 00:11:52.431 "name": "Nvme$subsystem", 00:11:52.431 "trtype": "$TEST_TRANSPORT", 00:11:52.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.431 "adrfam": "ipv4", 00:11:52.431 "trsvcid": "$NVMF_PORT", 00:11:52.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.431 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.431 "hdgst": ${hdgst:-false}, 00:11:52.431 "ddgst": ${ddgst:-false} 00:11:52.431 }, 00:11:52.431 "method": "bdev_nvme_attach_controller" 00:11:52.431 } 00:11:52.431 EOF 00:11:52.431 )") 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:52.431 { 00:11:52.431 "params": { 00:11:52.431 "name": "Nvme$subsystem", 00:11:52.431 "trtype": "$TEST_TRANSPORT", 00:11:52.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.431 "adrfam": "ipv4", 00:11:52.431 "trsvcid": "$NVMF_PORT", 00:11:52.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.431 "hdgst": ${hdgst:-false}, 00:11:52.431 "ddgst": ${ddgst:-false} 00:11:52.431 }, 00:11:52.431 "method": "bdev_nvme_attach_controller" 00:11:52.431 } 00:11:52.431 EOF 00:11:52.431 )") 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3466762 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:52.431 "params": { 00:11:52.431 "name": "Nvme1", 00:11:52.431 "trtype": "tcp", 00:11:52.431 "traddr": "10.0.0.2", 00:11:52.431 "adrfam": "ipv4", 00:11:52.431 "trsvcid": "4420", 00:11:52.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.431 "hdgst": false, 00:11:52.431 "ddgst": false 00:11:52.431 }, 00:11:52.431 "method": "bdev_nvme_attach_controller" 00:11:52.431 }' 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:52.431 "params": { 00:11:52.431 "name": "Nvme1", 00:11:52.431 "trtype": "tcp", 00:11:52.431 "traddr": "10.0.0.2", 00:11:52.431 "adrfam": "ipv4", 00:11:52.431 "trsvcid": "4420", 00:11:52.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.431 "hdgst": false, 00:11:52.431 "ddgst": false 00:11:52.431 }, 00:11:52.431 "method": "bdev_nvme_attach_controller" 00:11:52.431 }' 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:52.431 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:52.431 "params": { 00:11:52.432 "name": "Nvme1", 00:11:52.432 "trtype": "tcp", 00:11:52.432 "traddr": "10.0.0.2", 00:11:52.432 "adrfam": "ipv4", 00:11:52.432 "trsvcid": "4420", 00:11:52.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.432 "hdgst": false, 00:11:52.432 "ddgst": false 00:11:52.432 }, 00:11:52.432 "method": "bdev_nvme_attach_controller" 00:11:52.432 }' 00:11:52.432 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:52.432 20:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:52.432 "params": { 00:11:52.432 "name": "Nvme1", 00:11:52.432 "trtype": "tcp", 00:11:52.432 "traddr": "10.0.0.2", 00:11:52.432 "adrfam": "ipv4", 00:11:52.432 "trsvcid": "4420", 00:11:52.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.432 "hdgst": false, 00:11:52.432 "ddgst": false 00:11:52.432 }, 00:11:52.432 "method": "bdev_nvme_attach_controller" 00:11:52.432 }' 00:11:52.432 [2024-07-22 20:19:04.275161] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:52.432 [2024-07-22 20:19:04.275316] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:52.432 [2024-07-22 20:19:04.278917] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:52.432 [2024-07-22 20:19:04.278923] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:52.432 [2024-07-22 20:19:04.279020] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:52.432 [2024-07-22 20:19:04.279023] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:52.432 [2024-07-22 20:19:04.281791] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
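Each of the four heredocs above is gen_nvmf_target_json (from nvmf/common.sh) building a one-controller bdev_nvme_attach_controller config, and the /dev/fd/63 in the bdevperf command lines is consistent with that JSON being handed over through process substitution. In other words, bdev_io_wait.sh pins four independent bdevperf initiators to separate cores and runs a different workload on each against the same Nvme1n1 namespace. A condensed sketch of the launch pattern, with every flag taken from the traced command lines:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # gen_nvmf_target_json comes from nvmf/common.sh; its heredoc expansion is what the trace shows.
    # -q 128: queue depth, -o 4096: 4 KiB I/O, -t 1: one-second run,
    # -s 256: 256 MB of hugepage memory per instance (the "-m 256" in the EAL parameters).
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &  WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &  READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &  FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &  UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID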
00:11:52.432 [2024-07-22 20:19:04.281890] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.432 [2024-07-22 20:19:04.414535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.695 [2024-07-22 20:19:04.469758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.695 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.695 [2024-07-22 20:19:04.532259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.695 [2024-07-22 20:19:04.582781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:52.695 [2024-07-22 20:19:04.595091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.695 [2024-07-22 20:19:04.641254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:52.695 [2024-07-22 20:19:04.703231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:52.956 [2024-07-22 20:19:04.772858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:52.956 Running I/O for 1 seconds... 00:11:53.216 Running I/O for 1 seconds... 00:11:53.216 Running I/O for 1 seconds... 00:11:53.216 Running I/O for 1 seconds... 00:11:54.158 00:11:54.158 Latency(us) 00:11:54.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.158 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:54.158 Nvme1n1 : 1.00 170244.92 665.02 0.00 0.00 748.94 307.20 866.99 00:11:54.158 =================================================================================================================== 00:11:54.158 Total : 170244.92 665.02 0.00 0.00 748.94 307.20 866.99 00:11:54.158 00:11:54.158 Latency(us) 00:11:54.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.158 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:54.158 Nvme1n1 : 1.01 7685.27 30.02 0.00 0.00 16534.57 5898.24 26978.99 00:11:54.158 =================================================================================================================== 00:11:54.158 Total : 7685.27 30.02 0.00 0.00 16534.57 5898.24 26978.99 00:11:54.158 00:11:54.158 Latency(us) 00:11:54.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.158 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:54.158 Nvme1n1 : 1.00 16484.75 64.39 0.00 0.00 7741.51 3877.55 14417.92 00:11:54.158 =================================================================================================================== 00:11:54.158 Total : 16484.75 64.39 0.00 0.00 7741.51 3877.55 14417.92 00:11:54.419 00:11:54.419 Latency(us) 00:11:54.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.419 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:54.419 Nvme1n1 : 1.01 7294.26 28.49 0.00 0.00 17491.92 6635.52 36044.80 00:11:54.419 =================================================================================================================== 00:11:54.419 Total : 7294.26 28.49 0.00 0.00 17491.92 6635.52 36044.80 00:11:54.991 20:19:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3466764 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3466766 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3466769 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.991 rmmod nvme_tcp 00:11:54.991 rmmod nvme_fabrics 00:11:54.991 rmmod nvme_keyring 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:54.991 20:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3466410 ']' 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3466410 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3466410 ']' 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3466410 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:54.991 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3466410 00:11:55.252 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:55.252 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:55.252 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3466410' 00:11:55.252 killing process with pid 3466410 00:11:55.252 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3466410 00:11:55.252 20:19:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3466410 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.823 20:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.372 00:11:58.372 real 0m14.275s 00:11:58.372 user 0m26.884s 00:11:58.372 sys 0m7.427s 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:58.372 ************************************ 00:11:58.372 END TEST nvmf_bdev_io_wait 00:11:58.372 ************************************ 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.372 ************************************ 00:11:58.372 START TEST nvmf_queue_depth 00:11:58.372 ************************************ 00:11:58.372 20:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:58.372 * Looking for test storage... 
00:11:58.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.372 20:19:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.372 20:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.963 20:19:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:04.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:04.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:04.963 20:19:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:04.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:04.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.963 
20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.963 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.964 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.225 20:19:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:12:05.225 00:12:05.225 --- 10.0.0.2 ping statistics --- 00:12:05.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.225 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:12:05.225 00:12:05.225 --- 10.0.0.1 ping statistics --- 00:12:05.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.225 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3471536 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3471536 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3471536 ']' 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.225 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 [2024-07-22 20:19:17.182799] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
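waitforlisten in the trace above blocks until the freshly started target answers on its UNIX-domain RPC socket. Its implementation is not shown in this log, so the following is only a hypothetical stand-in illustrating the idea, using the real rpc_get_methods RPC as the liveness probe (the socket path comes from the trace; the rpc.py location is the stock SPDK one and is assumed):

    # hypothetical minimal waitforlisten: poll the RPC socket until the target responds
    nvmfpid=$1
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    until $RPC rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done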
00:12:05.225 [2024-07-22 20:19:17.182931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.486 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.486 [2024-07-22 20:19:17.336870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.747 [2024-07-22 20:19:17.568719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.747 [2024-07-22 20:19:17.568785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.747 [2024-07-22 20:19:17.568800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.747 [2024-07-22 20:19:17.568810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.747 [2024-07-22 20:19:17.568822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.747 [2024-07-22 20:19:17.568858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.007 [2024-07-22 20:19:17.991779] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.007 20:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.268 Malloc0 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.268 [2024-07-22 20:19:18.084579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3471813 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3471813 /var/tmp/bdevperf.sock 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3471813 ']' 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:06.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.268 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:06.268 [2024-07-22 20:19:18.170565] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
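Unlike the four one-shot initiators in the bdev_io_wait test, queue_depth.sh drives bdevperf through its own RPC socket: the process is started idle with -z, the NVMe/TCP controller is attached over RPC (which is what creates the NVMe0n1 bdev seen a little further down), and only then is the verify workload kicked off with bdevperf.py. A condensed sketch of that flow, with the commands taken from this trace (the explicit wait-for-socket loop and the plain rpc.py call stand in for the harness's waitforlisten and rpc_cmd wrappers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1. start bdevperf idle (-z) on a private RPC socket; no bdevs are configured yet
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # 2. wait for the socket, then attach the NVMe/TCP controller, creating bdev NVMe0n1
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # 3. run the configured workload: queue depth 1024, 4 KiB verify I/O for 10 seconds
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    wait $bdevperf_pid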
00:12:06.268 [2024-07-22 20:19:18.170682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471813 ] 00:12:06.268 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.528 [2024-07-22 20:19:18.293209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.528 [2024-07-22 20:19:18.471007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.099 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.099 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:07.099 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:07.099 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.099 20:19:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:07.099 NVMe0n1 00:12:07.099 20:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.099 20:19:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:07.099 Running I/O for 10 seconds... 00:12:19.327 00:12:19.327 Latency(us) 00:12:19.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.327 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:19.327 Verification LBA range: start 0x0 length 0x4000 00:12:19.327 NVMe0n1 : 10.07 10341.08 40.39 0.00 0.00 98595.67 26105.17 83449.17 00:12:19.327 =================================================================================================================== 00:12:19.327 Total : 10341.08 40.39 0.00 0.00 98595.67 26105.17 83449.17 00:12:19.327 0 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3471813 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3471813 ']' 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3471813 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471813 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471813' 00:12:19.327 killing process with pid 3471813 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3471813 00:12:19.327 Received shutdown 
signal, test time was about 10.000000 seconds 00:12:19.327 00:12:19.327 Latency(us) 00:12:19.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.327 =================================================================================================================== 00:12:19.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3471813 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.327 20:19:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.327 rmmod nvme_tcp 00:12:19.327 rmmod nvme_fabrics 00:12:19.327 rmmod nvme_keyring 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3471536 ']' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3471536 ']' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471536' 00:12:19.327 killing process with pid 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3471536 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.327 20:19:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.272 00:12:21.272 real 0m22.906s 00:12:21.272 user 0m27.000s 00:12:21.272 sys 0m6.553s 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.272 ************************************ 00:12:21.272 END TEST nvmf_queue_depth 00:12:21.272 ************************************ 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.272 ************************************ 00:12:21.272 START TEST nvmf_target_multipath 00:12:21.272 ************************************ 00:12:21.272 20:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:21.272 * Looking for test storage... 
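Before the multipath output continues, the queue-depth run that finishes above condenses to the short sequence sketched below. Every rpc subcommand and bdevperf flag is copied from the trace; the standalone layout, the simple backgrounding, and calling scripts/rpc.py directly where the test script goes through its rpc_cmd/waitforlisten helpers are simplifications for illustration only.

    # Condensed sketch of the nvmf_queue_depth flow traced above (not the actual test script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target side: attach Malloc0 to the subsystem created earlier in the log and
    # listen for NVMe/TCP on 10.0.0.2:4420.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf starts idle (-z) on its own RPC socket, configured for
    # 4 KiB verify I/O at queue depth 1024 for 10 seconds.
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # Attach the remote namespace as bdev NVMe0n1, then trigger the actual run.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The latency table above is the output of that perform_tests call: roughly 10.3k IOPS of 4 KiB verify I/O sustained over TCP at queue depth 1024.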
00:12:21.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.272 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.273 20:19:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
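The device discovery that continues below works by matching PCI vendor/device IDs against the per-family tables (e810, x722, mlx) being declared here and then resolving each kept PCI address to its kernel net device. A minimal standalone sketch of that idea, limited to the two Intel E810 IDs this trace reports (0x1592, 0x159b); the real gather_supported_nvmf_pci_devs helper in nvmf/common.sh consults a prebuilt pci_bus_cache and also covers the other families and the RDMA cases, so treat the loop as an illustration only.

    #!/usr/bin/env bash
    # Bucket Intel E810 NICs by PCI ID and map each one to its net device name(s).
    intel=0x8086
    e810_ids=(0x1592 0x159b)
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] && e810+=("${dev##*/}")
        done
    done
    for pci in "${e810[@]}"; do
        # On this rig these resolve to cvl_0_0 and cvl_0_1, the interfaces used below.
        printf 'Found net devices under %s: %s\n' "$pci" "$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
    done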
00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:27.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:27.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.862 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:27.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.863 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.124 20:19:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:28.124 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.124 20:19:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.124 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.124 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.124 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.124 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.385 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:12:28.385 00:12:28.385 --- 10.0.0.2 ping statistics --- 00:12:28.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.385 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:12:28.385 00:12:28.385 --- 10.0.0.1 ping statistics --- 00:12:28.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.385 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:28.385 only one NIC for nvmf test 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.385 rmmod nvme_tcp 00:12:28.385 rmmod nvme_fabrics 00:12:28.385 rmmod nvme_keyring 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.385 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.386 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.386 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.386 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.386 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.386 20:19:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.931 00:12:30.931 real 0m9.438s 
00:12:30.931 user 0m2.050s 00:12:30.931 sys 0m5.284s 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:30.931 ************************************ 00:12:30.931 END TEST nvmf_target_multipath 00:12:30.931 ************************************ 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:30.931 ************************************ 00:12:30.931 START TEST nvmf_zcopy 00:12:30.931 ************************************ 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:30.931 * Looking for test storage... 00:12:30.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.931 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:30.932 20:19:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.932 20:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:37.522 20:19:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:37.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.522 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:37.523 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:37.523 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:37.523 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.523 20:19:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.523 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:12:37.785 00:12:37.785 --- 10.0.0.2 ping statistics --- 00:12:37.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.785 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:12:37.785 00:12:37.785 --- 10.0.0.1 ping statistics --- 00:12:37.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.785 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3482480 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3482480 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3482480 ']' 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.785 20:19:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.785 [2024-07-22 20:19:49.771457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:37.785 [2024-07-22 20:19:49.771564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.047 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.047 [2024-07-22 20:19:49.919510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.308 [2024-07-22 20:19:50.160529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.308 [2024-07-22 20:19:50.160606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.308 [2024-07-22 20:19:50.160623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.308 [2024-07-22 20:19:50.160633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.308 [2024-07-22 20:19:50.160647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.308 [2024-07-22 20:19:50.160690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.570 [2024-07-22 20:19:50.571950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.570 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.570 [2024-07-22 20:19:50.588252] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 malloc0 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:38.832 { 00:12:38.832 "params": { 00:12:38.832 "name": "Nvme$subsystem", 00:12:38.832 "trtype": "$TEST_TRANSPORT", 00:12:38.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:38.832 "adrfam": "ipv4", 00:12:38.832 "trsvcid": "$NVMF_PORT", 00:12:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:38.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:38.832 "hdgst": ${hdgst:-false}, 00:12:38.832 "ddgst": ${ddgst:-false} 00:12:38.832 }, 00:12:38.832 "method": "bdev_nvme_attach_controller" 00:12:38.832 } 00:12:38.832 EOF 00:12:38.832 )") 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
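Up to this point the target has been configured entirely over JSON-RPC; rpc_cmd in the test scripts is, in effect, a wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket the target announced above. Replayed by hand from an SPDK checkout, the sequence that produced the listeners and the malloc-backed namespace would look like the sketch below (arguments are the ones visible in the trace; the scripts/rpc.py prefix is an assumption about where it is run from):

    # TCP transport with zero-copy enabled, as in zcopy.sh@22 (-o and -c 0 come from NVMF_TRANSPORT_OPTS)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem with serial SPDK00000000000001, any host allowed, at most 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data and discovery listeners on the target-side address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1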
00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:38.832 20:19:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:38.832 "params": { 00:12:38.832 "name": "Nvme1", 00:12:38.832 "trtype": "tcp", 00:12:38.832 "traddr": "10.0.0.2", 00:12:38.832 "adrfam": "ipv4", 00:12:38.832 "trsvcid": "4420", 00:12:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.832 "hdgst": false, 00:12:38.832 "ddgst": false 00:12:38.832 }, 00:12:38.832 "method": "bdev_nvme_attach_controller" 00:12:38.832 }' 00:12:38.832 [2024-07-22 20:19:50.743951] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:38.832 [2024-07-22 20:19:50.744076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482829 ] 00:12:38.832 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.092 [2024-07-22 20:19:50.869379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.092 [2024-07-22 20:19:51.052665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.663 Running I/O for 10 seconds... 00:12:49.662 00:12:49.662 Latency(us) 00:12:49.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.662 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:49.662 Verification LBA range: start 0x0 length 0x1000 00:12:49.662 Nvme1n1 : 10.01 8649.20 67.57 0.00 0.00 14742.36 1911.47 28835.84 00:12:49.662 =================================================================================================================== 00:12:49.662 Total : 8649.20 67.57 0.00 0.00 14742.36 1911.47 28835.84 00:12:50.603 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3485016 00:12:50.603 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:50.603 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.603 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.604 { 00:12:50.604 "params": { 00:12:50.604 "name": "Nvme$subsystem", 00:12:50.604 "trtype": "$TEST_TRANSPORT", 00:12:50.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.604 "adrfam": "ipv4", 00:12:50.604 "trsvcid": "$NVMF_PORT", 00:12:50.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.604 "hdgst": ${hdgst:-false}, 00:12:50.604 "ddgst": ${ddgst:-false} 00:12:50.604 }, 00:12:50.604 "method": "bdev_nvme_attach_controller" 00:12:50.604 } 00:12:50.604 EOF 00:12:50.604 )") 00:12:50.604 [2024-07-22 
20:20:02.264666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.264708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:50.604 [2024-07-22 20:20:02.272638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.272661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:50.604 20:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.604 "params": { 00:12:50.604 "name": "Nvme1", 00:12:50.604 "trtype": "tcp", 00:12:50.604 "traddr": "10.0.0.2", 00:12:50.604 "adrfam": "ipv4", 00:12:50.604 "trsvcid": "4420", 00:12:50.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.604 "hdgst": false, 00:12:50.604 "ddgst": false 00:12:50.604 }, 00:12:50.604 "method": "bdev_nvme_attach_controller" 00:12:50.604 }' 00:12:50.604 [2024-07-22 20:20:02.280641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.280661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.288654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.288670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.296666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.296683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.304693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.304710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.312712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.312728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.320724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.320740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.328757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.328773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.334376] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
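Two things are interleaved from here on. First, a second bdevperf instance is being launched against the same target, this time as a 5-second 50/50 random read/write job (-t 5 -q 128 -w randrw -M 50 -o 8192); the --json /dev/fd/63 argument is the footprint of bash process substitution, i.e. the generated bdev configuration is handed to bdevperf as a virtual file rather than written to disk. Second, the repeating subsystem.c/nvmf_rpc.c error pairs are namespace-add RPCs being rejected because NSID 1 is already attached; they read as the test exercising the RPC path alongside zero-copy I/O rather than a failure of the run, since the job still goes on to start ("Running I/O for 5 seconds" below). The generated configuration boils down to a JSON document of roughly this shape, where the field values are copied from the fragment printed above but the surrounding "subsystems" wrapper is an assumption based on the usual SPDK JSON-config layout, since only the inner fragment appears in the trace:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

Saved to a file (say /tmp/nvme_bdev.json, a hypothetical path), the second run could be replayed outside the harness as:

    # 5 s, queue depth 128, 8 KiB I/Os, 50/50 random read/write -- matching the invocation in the trace
    build/examples/bdevperf --json /tmp/nvme_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192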
00:12:50.604 [2024-07-22 20:20:02.334475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485016 ] 00:12:50.604 [2024-07-22 20:20:02.336766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.336782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.344800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.344816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.352816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.352832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.360830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.360846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.368867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.368883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.376879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.376895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.384891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.384907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.392918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.392935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.604 [2024-07-22 20:20:02.400930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.400946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.408965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.408982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.416986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.417002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.424992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.425008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.433027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.433043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 
20:20:02.441042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.441058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.443705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.604 [2024-07-22 20:20:02.449059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.449075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.457091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.457107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.465112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.465128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.473128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.473144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.481148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.481163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.489158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.489173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.497194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.497217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.505216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.505232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.513227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.513243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.521255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.521271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.529277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.529295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.537302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.537319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.545321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.545337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.553331] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.553347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.561368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.561385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.569385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.569400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.577397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.604 [2024-07-22 20:20:02.577412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.604 [2024-07-22 20:20:02.585423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.605 [2024-07-22 20:20:02.585439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.605 [2024-07-22 20:20:02.593437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.605 [2024-07-22 20:20:02.593452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.605 [2024-07-22 20:20:02.601465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.605 [2024-07-22 20:20:02.601481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.605 [2024-07-22 20:20:02.609485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.605 [2024-07-22 20:20:02.609500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.605 [2024-07-22 20:20:02.617497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.605 [2024-07-22 20:20:02.617512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.605 [2024-07-22 20:20:02.620484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.866 [2024-07-22 20:20:02.625529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.625548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.633553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.633569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.641564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.641579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.649596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.649612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.657617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.657633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.665638] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.665654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.673656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.673672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.681669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.681685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.689707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.689723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.697725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.697741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.705730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.705746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.713762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.713778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.721779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.721795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.729802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.729817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.737820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.737835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.745831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.745847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.753874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.753889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.761883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.761899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.769900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.769915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.777933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.777952] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.785946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.785963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.793975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.793991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.801992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.802008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.810003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.810019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.818033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.818049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.826054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.826069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.834063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.834078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.842095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.842111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.850117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.850133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.858138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.858154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.866170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.866188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.874174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.874191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.866 [2024-07-22 20:20:02.882208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.866 [2024-07-22 20:20:02.882225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.890231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.890248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.898246] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.898263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.906273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.906289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.914283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.914299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.922317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.922333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.930339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.930357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.938347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.938363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.946391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.946408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.954397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.954413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.962410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.962426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.970441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.970457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.978454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.978469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.986488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.986504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:02.994510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:02.994526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.002519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.002535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.010560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.010576] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.018587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.018603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.026582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.026598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.034614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.034630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.042640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.042656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.050661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.050679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.058679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.058695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 Running I/O for 5 seconds... 00:12:51.127 [2024-07-22 20:20:03.070993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.071015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.078184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.078212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.087780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.087799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.097116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.097136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.105753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.105771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.114129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.114147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.122590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.127 [2024-07-22 20:20:03.122609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.127 [2024-07-22 20:20:03.131448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.128 [2024-07-22 20:20:03.131467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.128 [2024-07-22 20:20:03.141005] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.128 [2024-07-22 20:20:03.141023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.149553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.149572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.158339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.158357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.167406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.167425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.176050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.176069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.185310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.185328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.194442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.194460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.203001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.203019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.211540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.211558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.220388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.220406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.229180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.229199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.238351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.388 [2024-07-22 20:20:03.238370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.388 [2024-07-22 20:20:03.247332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.247350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.256759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.256779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.266025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.266044] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.275544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.275563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.283942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.283961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.292979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.292998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.302281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.302302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.311253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.311272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.320377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.320396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.328755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.328773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.336911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.336930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.345861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.345879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.354640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.354659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.363705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.363724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.370437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.370454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.381269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.381287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.389960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.389978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.398914] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.398933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.389 [2024-07-22 20:20:03.408120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.389 [2024-07-22 20:20:03.408138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.416688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.416709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.425064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.425082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.434464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.434482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.442809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.442827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.452234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.452252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.461506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.461524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.470821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.470846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.480178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.480196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.488994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.489012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.497336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.497355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.506042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.506061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.515378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.515397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.524774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.524792] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.533715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.533733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.543148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.543166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.552032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.552053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.560622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.560641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.569917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.569935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.579541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.579560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.588654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.588677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.597606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.597625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.606231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.606250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.615249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.615268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.624089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.624108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.633405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.633425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.642219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.642238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.651039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.650 [2024-07-22 20:20:03.651057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.650 [2024-07-22 20:20:03.659831] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.650 [2024-07-22 20:20:03.659849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.650 [2024-07-22 20:20:03.669046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.650 [2024-07-22 20:20:03.669064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace") repeats for every subsequent add-namespace attempt, from 20:20:03.678 through 20:20:06.345 ...]
00:12:54.579 [2024-07-22 20:20:06.354918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.579 [2024-07-22 20:20:06.354936]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.364170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.364188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.372635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.372653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.381555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.381573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.390799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.390817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.399383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.399401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.408599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.408617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.417032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.417049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.426177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.426196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.434735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.434757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.443585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.443603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.452442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.452460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.461148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.461167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.470538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.470556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.478836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.478854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.488357] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.488375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.497103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.497122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.506416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.506434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.515295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.515313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.524403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.524422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.533671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.533689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.542545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.542563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.551972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.551990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.560981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.560999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.569519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.569537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.578570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.578590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.579 [2024-07-22 20:20:06.587653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.579 [2024-07-22 20:20:06.587671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.580 [2024-07-22 20:20:06.596077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.580 [2024-07-22 20:20:06.596098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.604839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.604863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.613449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.613468] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.622345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.622363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.631245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.631263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.640105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.640123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.648902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.648920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.658213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.658230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.667083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.667101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.675776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.675794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.684836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.684855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.693717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.693736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.702740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.702758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.711184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.711207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.841 [2024-07-22 20:20:06.720555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.841 [2024-07-22 20:20:06.720574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.729548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.729566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.737836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.737854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.747221] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.747239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.756103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.756121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.765324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.765341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.774383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.774405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.783080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.783099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.791988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.792006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.801361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.801379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.810229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.810247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.817062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.817080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.826861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.826880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.835536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.835555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.844026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.844045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.853377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.853396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.842 [2024-07-22 20:20:06.862283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.842 [2024-07-22 20:20:06.862302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.870840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.870859] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.879760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.879785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.888181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.888205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.897374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.897392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.905963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.905982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.914553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.914572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.923824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.923843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.932523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.932542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.941244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.941263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.950650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.950668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.959094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.959111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.967430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.967448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.976831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.976849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.985183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.985209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:06.993545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:06.993564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.002698] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.002716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.011734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.011752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.020574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.020592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.029616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.029635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.038493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.038512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.047572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.047591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.104 [2024-07-22 20:20:07.056775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.104 [2024-07-22 20:20:07.056794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.066155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.066174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.074966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.074984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.083906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.083924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.093554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.093571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.102165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.102183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.111290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.111312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.105 [2024-07-22 20:20:07.120207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.105 [2024-07-22 20:20:07.120227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.128733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.128753] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.137178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.137196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.146052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.146071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.154460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.154479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.163467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.163487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.172359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.172377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.181345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.181364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.190321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.190340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.199281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.199300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.208529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.208548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.217197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.217222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.225990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.226009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.234638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.234656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.243754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.243773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.252255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.252274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.260849] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.260867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.270322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.270341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.279227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.279246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.287746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.287765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.296658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.296676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.306116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.306136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.314226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.314244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.322693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.322712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.331439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.331457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.340316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.340335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.349241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.349259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.357484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.357503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.366385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.366405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.374728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.374747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.366 [2024-07-22 20:20:07.383792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.366 [2024-07-22 20:20:07.383810] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.392346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.392365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.401317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.401336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.410540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.410559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.418892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.418911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.427447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.427465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.436636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.436654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.627 [2024-07-22 20:20:07.445116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.627 [2024-07-22 20:20:07.445140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.454586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.454605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.463132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.463152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.472232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.472251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.481798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.481818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.490367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.490386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.498911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.498930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.505716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.505733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.515760] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.515778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.524514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.524532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.533189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.533215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.542372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.542390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.550786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.550805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.557628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.557645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.568472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.568491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.577327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.577346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.586221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.586240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.595165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.595184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.604596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.604618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.613434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.613453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.622049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.622068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.631704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.631723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.639793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.639812] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.628 [2024-07-22 20:20:07.648769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.628 [2024-07-22 20:20:07.648787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.657373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.657391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.666553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.666571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.674967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.674985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.683376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.683394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.692828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.692847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.701101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.701119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.709935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.709953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.719015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.719033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.727790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.727808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.736353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.736371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.745664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.745682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.754697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.754716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.764020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.764039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.773462] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.773484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.782043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.782062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.790069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.790087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.799086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.799105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.808036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.808054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.817258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.817276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.826196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.826222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.835089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.835108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.843871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.843889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.852338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.852356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.861216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.861235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.869749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.869768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.878837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.878858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.887118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.887137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.895800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.895819] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.889 [2024-07-22 20:20:07.904655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.889 [2024-07-22 20:20:07.904674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.913286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.913305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.922552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.922570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.931166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.931184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.939995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.940017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.949692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.949710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.958043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.958061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.966943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.966961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.975892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.975911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.984758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.984776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:07.994187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:07.994210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.002959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.002978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.011590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.011615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.020567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.020586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.029414] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.029432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.037726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.037745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.046769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.046787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.055185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.055207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.064921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.064939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.073343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.073361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.079561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.079579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150
00:12:56.150                                                                                                       Latency(us)
00:12:56.150 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:12:56.150 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:56.150 Nvme1n1            :       5.01   17297.25     135.13       0.00       0.00    7392.31    2648.75   16820.91
00:12:56.150 ===================================================================================================================
00:12:56.150 Total              :              17297.25     135.13       0.00       0.00    7392.31    2648.75   16820.91
00:12:56.150 [2024-07-22 20:20:08.087591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.087609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.095607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.095625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.103615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.103631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.111649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.111666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.119676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.119694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.127682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.127700]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.135710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.135726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.143733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.143749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.151760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.151776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.159772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.159787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.150 [2024-07-22 20:20:08.167782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.150 [2024-07-22 20:20:08.167798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.175817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.175834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.183834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.183850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.191845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.191861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.199875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.199891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.207885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.207900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.215914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.215930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.223933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.223949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.231947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.231962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.239984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.239999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.247998] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.248013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.256014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.256030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.264041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.264057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.272052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.272068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.280085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.280101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.288101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.288117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.296111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.296126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.304147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.304163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.312161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.312176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.320178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.320194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.328218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.328235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.336235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.336251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.344247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.344263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.352269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.352285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.360288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.360303] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.368314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.368330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.376332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.376347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.384345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.384362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.392378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.392393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.400385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.400401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.408416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.408432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.416433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.416449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.424447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.424462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.412 [2024-07-22 20:20:08.432485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.412 [2024-07-22 20:20:08.432500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.440497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.440514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.448511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.448526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.456539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.456555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.464549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.464565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.472580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.472596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.480603] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.480619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.488612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.488627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.496648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.496664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.504665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.504680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.512682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.512697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.520711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.520727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.528735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.528755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.536752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.536768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.544770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.544785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.552801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.552817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.560818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.560833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.568836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.568852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.576848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.576864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.584878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.584894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.592892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.592908] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.600924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.600939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.608940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.608956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.616953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.616969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.624995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.625012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.633008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.633024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.641023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.673 [2024-07-22 20:20:08.641040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.673 [2024-07-22 20:20:08.649052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.649067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.674 [2024-07-22 20:20:08.657061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.657076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.674 [2024-07-22 20:20:08.665088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.665104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.674 [2024-07-22 20:20:08.673112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.673128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.674 [2024-07-22 20:20:08.681125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.681144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.674 [2024-07-22 20:20:08.689151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.674 [2024-07-22 20:20:08.689167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.697175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.697191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.705186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.705208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.713217] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.713233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.721240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.721256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.729258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.729273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.737280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.737296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.745293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.745308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.753327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.753344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.761347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.761362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.769358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.769374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.935 [2024-07-22 20:20:08.777392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.935 [2024-07-22 20:20:08.777408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3485016) - No such process 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3485016 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:56.936 delay0 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:56.936 20:20:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.936 20:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:56.936 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.936 [2024-07-22 20:20:08.951048] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:03.521 Initializing NVMe Controllers 00:13:03.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:03.521 Initialization complete. Launching workers. 00:13:03.521 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1018 00:13:03.521 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1287, failed to submit 51 00:13:03.521 success 1132, unsuccess 155, failed 0 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.521 rmmod nvme_tcp 00:13:03.521 rmmod nvme_fabrics 00:13:03.521 rmmod nvme_keyring 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3482480 ']' 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3482480 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3482480 ']' 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3482480 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3482480 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:03.521 20:20:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3482480' 00:13:03.521 killing process with pid 3482480 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3482480 00:13:03.521 20:20:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3482480 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.093 20:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.639 00:13:06.639 real 0m35.659s 00:13:06.639 user 0m49.799s 00:13:06.639 sys 0m10.483s 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.639 ************************************ 00:13:06.639 END TEST nvmf_zcopy 00:13:06.639 ************************************ 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:06.639 ************************************ 00:13:06.639 START TEST nvmf_nmic 00:13:06.639 ************************************ 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:06.639 * Looking for test storage... 
00:13:06.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.639 20:20:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.639 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.640 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.640 20:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.233 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:13.234 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:13.234 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.234 20:20:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:13.234 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:13.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.234 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.495 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.495 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.495 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:13:13.496 00:13:13.496 --- 10.0.0.2 ping statistics --- 00:13:13.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.496 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:13:13.496 00:13:13.496 --- 10.0.0.1 ping statistics --- 00:13:13.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.496 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3491848 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3491848 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3491848 ']' 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.496 20:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 [2024-07-22 20:20:25.448878] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:13.496 [2024-07-22 20:20:25.448976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.496 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.757 [2024-07-22 20:20:25.568744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.757 [2024-07-22 20:20:25.750404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.757 [2024-07-22 20:20:25.750453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.757 [2024-07-22 20:20:25.750468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.757 [2024-07-22 20:20:25.750478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.757 [2024-07-22 20:20:25.750488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
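The nvmfappstart step traced here is essentially: launch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then block until its JSON-RPC server answers before any configuration RPCs are issued. A rough stand-in for the waitforlisten helper, assuming the default /var/tmp/spdk.sock RPC socket (the Unix socket lives in the host filesystem, so rpc.py can be run from outside the netns):

  sudo ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # rpc_get_methods is a cheap query; retry until the target's RPC server is up.
  until sudo scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
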
00:13:13.757 [2024-07-22 20:20:25.750665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.757 [2024-07-22 20:20:25.750747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.757 [2024-07-22 20:20:25.750860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.757 [2024-07-22 20:20:25.750889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.330 [2024-07-22 20:20:26.228868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.330 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 Malloc0 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 [2024-07-22 20:20:26.325708] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:14.331 test case1: single bdev can't be used in multiple subsystems 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.331 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.592 [2024-07-22 20:20:26.361611] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:14.592 [2024-07-22 20:20:26.361645] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:14.592 [2024-07-22 20:20:26.361662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.592 request: 00:13:14.592 { 00:13:14.592 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:14.592 "namespace": { 00:13:14.592 "bdev_name": "Malloc0", 00:13:14.592 "no_auto_visible": false 00:13:14.592 }, 00:13:14.592 "method": "nvmf_subsystem_add_ns", 00:13:14.592 "req_id": 1 00:13:14.592 } 00:13:14.592 Got JSON-RPC error response 00:13:14.592 response: 00:13:14.592 { 00:13:14.592 "code": -32602, 00:13:14.592 "message": "Invalid parameters" 00:13:14.592 } 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:14.592 Adding namespace failed - expected result. 
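Test case1 above exercises bdev claim semantics: Malloc0 is already attached to cnode1 under an exclusive_write claim, so attaching it to cnode2 must be rejected, and the harness treats the -32602 "Invalid parameters" response as the expected result. Reproduced outside the harness, the same check is roughly (same RPCs as in the trace, nothing nmic.sh-specific):

  sudo scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Expected to fail: Malloc0 is claimed by cnode1, so the add_ns RPC returns "Invalid parameters".
  if sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: duplicate claim was accepted"
  else
      echo "add rejected as expected"
  fi
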
00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:14.592 test case2: host connect to nvmf target in multiple paths 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.592 [2024-07-22 20:20:26.373770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.592 20:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.978 20:20:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:17.894 20:20:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.894 20:20:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.894 20:20:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.894 20:20:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.894 20:20:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:19.838 20:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:19.838 [global] 00:13:19.838 thread=1 00:13:19.838 invalidate=1 00:13:19.838 rw=write 00:13:19.838 time_based=1 00:13:19.838 runtime=1 00:13:19.838 ioengine=libaio 00:13:19.838 direct=1 00:13:19.838 bs=4096 00:13:19.838 iodepth=1 00:13:19.838 norandommap=0 00:13:19.838 numjobs=1 00:13:19.838 00:13:19.838 verify_dump=1 00:13:19.838 verify_backlog=512 00:13:19.838 verify_state_save=0 00:13:19.838 do_verify=1 00:13:19.838 verify=crc32c-intel 00:13:19.838 [job0] 00:13:19.838 filename=/dev/nvme0n1 00:13:19.838 Could not set queue depth (nvme0n1) 00:13:19.838 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:13:19.838 fio-3.35 00:13:19.838 Starting 1 thread 00:13:21.225 00:13:21.225 job0: (groupid=0, jobs=1): err= 0: pid=3493392: Mon Jul 22 20:20:32 2024 00:13:21.225 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:21.225 slat (nsec): min=7065, max=62032, avg=24821.42, stdev=4726.12 00:13:21.225 clat (usec): min=879, max=1330, avg=1123.01, stdev=79.90 00:13:21.225 lat (usec): min=903, max=1354, avg=1147.84, stdev=79.70 00:13:21.225 clat percentiles (usec): 00:13:21.225 | 1.00th=[ 914], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1057], 00:13:21.225 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:13:21.225 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:13:21.225 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:13:21.225 | 99.99th=[ 1336] 00:13:21.225 write: IOPS=558, BW=2234KiB/s (2287kB/s)(2236KiB/1001msec); 0 zone resets 00:13:21.225 slat (nsec): min=9104, max=65168, avg=26923.23, stdev=9114.35 00:13:21.225 clat (usec): min=412, max=923, avg=695.61, stdev=93.10 00:13:21.225 lat (usec): min=423, max=953, avg=722.54, stdev=97.35 00:13:21.225 clat percentiles (usec): 00:13:21.225 | 1.00th=[ 457], 5.00th=[ 523], 10.00th=[ 562], 20.00th=[ 627], 00:13:21.225 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 725], 00:13:21.225 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 832], 00:13:21.225 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922], 00:13:21.225 | 99.99th=[ 922] 00:13:21.225 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:21.225 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:21.225 lat (usec) : 500=1.87%, 750=33.80%, 1000=20.45% 00:13:21.225 lat (msec) : 2=43.88% 00:13:21.225 cpu : usr=1.90%, sys=2.50%, ctx=1071, majf=0, minf=1 00:13:21.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.225 issued rwts: total=512,559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.225 00:13:21.225 Run status group 0 (all jobs): 00:13:21.225 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:13:21.225 WRITE: bw=2234KiB/s (2287kB/s), 2234KiB/s-2234KiB/s (2287kB/s-2287kB/s), io=2236KiB (2290kB), run=1001-1001msec 00:13:21.225 00:13:21.225 Disk stats (read/write): 00:13:21.225 nvme0n1: ios=501/512, merge=0/0, ticks=569/339, in_queue=908, util=94.09% 00:13:21.225 20:20:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.484 20:20:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.484 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.484 rmmod nvme_tcp 00:13:21.484 rmmod nvme_fabrics 00:13:21.484 rmmod nvme_keyring 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3491848 ']' 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3491848 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3491848 ']' 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3491848 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491848 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491848' 00:13:21.744 killing process with pid 3491848 00:13:21.744 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3491848 00:13:21.745 20:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3491848 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.687 20:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.604 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.604 00:13:24.604 real 0m18.372s 00:13:24.604 user 0m47.238s 00:13:24.604 sys 0m6.083s 00:13:24.604 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.604 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 ************************************ 00:13:24.604 END TEST nvmf_nmic 00:13:24.604 ************************************ 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:24.866 ************************************ 00:13:24.866 START TEST nvmf_fio_target 00:13:24.866 ************************************ 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.866 * Looking for test storage... 00:13:24.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.866 20:20:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.866 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.867 20:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:33.014 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:33.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:33.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:33.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:33.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.015 20:20:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:13:33.015 00:13:33.015 --- 10.0.0.2 ping statistics --- 00:13:33.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.015 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:13:33.015 20:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:13:33.015 00:13:33.015 --- 10.0.0.1 ping statistics --- 00:13:33.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.015 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3498042 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3498042 00:13:33.015 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3498042 ']' 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.016 20:20:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.016 [2024-07-22 20:20:44.144638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:33.016 [2024-07-22 20:20:44.144737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.016 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.016 [2024-07-22 20:20:44.264647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.016 [2024-07-22 20:20:44.449085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.016 [2024-07-22 20:20:44.449130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.016 [2024-07-22 20:20:44.449143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.016 [2024-07-22 20:20:44.449152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.016 [2024-07-22 20:20:44.449162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
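Editor's note: at this point nvmf_tgt is up inside the cvl_0_0_ns_spdk namespace and fio.sh starts configuring it over /var/tmp/spdk.sock. For orientation, the lines below condense the RPC sequence that the trace below walks through into one standalone sketch; every command and flag is taken from the log itself, but the harness actually issues them through its rpc_cmd/xtrace wrappers rather than invoking rpc.py directly as shown, so treat this as an illustrative summary rather than the script's literal text.

# Condensed sketch of the fio.sh target setup that follows in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with 8192-byte in-capsule data (fio.sh@19).
$rpc nvmf_create_transport -t tcp -o -u 8192

# Backing bdevs: two plain malloc namespaces plus a RAID0 and a concat volume.
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_malloc_create 64 512                                   # Malloc2
$rpc bdev_malloc_create 64 512                                   # Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512                                   # Malloc4
$rpc bdev_malloc_create 64 512                                   # Malloc5
$rpc bdev_malloc_create 64 512                                   # Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing all four namespaces on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: after connecting, the four namespaces show up as nvme0n1..nvme0n4
# and are exercised by the fio-wrapper runs later in the trace.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
             --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420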
00:13:33.016 [2024-07-22 20:20:44.452231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.016 [2024-07-22 20:20:44.452544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.016 [2024-07-22 20:20:44.452661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.016 [2024-07-22 20:20:44.452684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.016 20:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.276 [2024-07-22 20:20:45.072455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.276 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.536 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:33.536 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.536 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:33.536 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.797 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:33.797 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.058 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:34.058 20:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:34.318 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.579 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:34.579 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.579 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:34.579 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.840 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:34.840 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:35.101 20:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.361 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:35.361 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.361 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:35.361 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.622 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.622 [2024-07-22 20:20:47.620578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.883 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:35.883 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:36.143 20:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:38.056 20:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.969 20:20:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:39.969 20:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:39.969 [global] 00:13:39.969 thread=1 00:13:39.969 invalidate=1 00:13:39.969 rw=write 00:13:39.969 time_based=1 00:13:39.969 runtime=1 00:13:39.969 ioengine=libaio 00:13:39.969 direct=1 00:13:39.969 bs=4096 00:13:39.969 iodepth=1 00:13:39.969 norandommap=0 00:13:39.969 numjobs=1 00:13:39.969 00:13:39.969 verify_dump=1 00:13:39.969 verify_backlog=512 00:13:39.969 verify_state_save=0 00:13:39.969 do_verify=1 00:13:39.969 verify=crc32c-intel 00:13:39.969 [job0] 00:13:39.969 filename=/dev/nvme0n1 00:13:39.969 [job1] 00:13:39.969 filename=/dev/nvme0n2 00:13:39.969 [job2] 00:13:39.969 filename=/dev/nvme0n3 00:13:39.969 [job3] 00:13:39.969 filename=/dev/nvme0n4 00:13:39.969 Could not set queue depth (nvme0n1) 00:13:39.969 Could not set queue depth (nvme0n2) 00:13:39.969 Could not set queue depth (nvme0n3) 00:13:39.969 Could not set queue depth (nvme0n4) 00:13:40.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.230 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.230 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.230 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.230 fio-3.35 00:13:40.230 Starting 4 threads 00:13:41.615 00:13:41.615 job0: (groupid=0, jobs=1): err= 0: pid=3499668: Mon Jul 22 20:20:53 2024 00:13:41.615 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:41.615 slat (nsec): min=13243, max=60181, avg=27212.48, stdev=3512.82 00:13:41.615 clat (usec): min=469, max=1302, avg=1063.45, stdev=132.06 00:13:41.615 lat (usec): min=497, max=1328, avg=1090.67, stdev=132.10 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 693], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 947], 00:13:41.615 | 30.00th=[ 988], 40.00th=[ 1029], 50.00th=[ 1090], 60.00th=[ 1139], 00:13:41.615 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:13:41.615 | 99.00th=[ 1287], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:13:41.615 | 99.99th=[ 1303] 00:13:41.615 write: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec); 0 zone resets 00:13:41.615 slat (usec): min=9, max=1068, avg=33.21, stdev=41.16 00:13:41.615 clat (usec): min=218, max=998, avg=601.74, stdev=136.17 00:13:41.615 lat (usec): min=252, max=1787, avg=634.95, stdev=146.08 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 334], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 482], 00:13:41.615 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635], 00:13:41.615 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 832], 00:13:41.615 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 996], 99.95th=[ 996], 00:13:41.615 | 99.99th=[ 996] 00:13:41.615 bw ( KiB/s): min= 4096, max= 4096, per=38.46%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.615 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, 
samples=1 00:13:41.615 lat (usec) : 250=0.08%, 500=13.41%, 750=36.88%, 1000=21.21% 00:13:41.615 lat (msec) : 2=28.42% 00:13:41.615 cpu : usr=2.40%, sys=4.80%, ctx=1195, majf=0, minf=1 00:13:41.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 issued rwts: total=512,681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.615 job1: (groupid=0, jobs=1): err= 0: pid=3499676: Mon Jul 22 20:20:53 2024 00:13:41.615 read: IOPS=521, BW=2088KiB/s (2138kB/s)(2092KiB/1002msec) 00:13:41.615 slat (nsec): min=6615, max=45602, avg=23992.26, stdev=6332.13 00:13:41.615 clat (usec): min=255, max=4790, avg=934.06, stdev=346.24 00:13:41.615 lat (usec): min=263, max=4816, avg=958.05, stdev=348.27 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 408], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 586], 00:13:41.615 | 30.00th=[ 619], 40.00th=[ 816], 50.00th=[ 1106], 60.00th=[ 1139], 00:13:41.615 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:13:41.615 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 4817], 99.95th=[ 4817], 00:13:41.615 | 99.99th=[ 4817] 00:13:41.615 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:13:41.615 slat (nsec): min=9529, max=51593, avg=25586.28, stdev=11182.37 00:13:41.615 clat (usec): min=129, max=3662, avg=453.11, stdev=200.00 00:13:41.615 lat (usec): min=140, max=3695, avg=478.69, stdev=201.46 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 184], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 297], 00:13:41.615 | 30.00th=[ 355], 40.00th=[ 379], 50.00th=[ 400], 60.00th=[ 420], 00:13:41.615 | 70.00th=[ 490], 80.00th=[ 635], 90.00th=[ 734], 95.00th=[ 799], 00:13:41.615 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 3654], 00:13:41.615 | 99.99th=[ 3654] 00:13:41.615 bw ( KiB/s): min= 4096, max= 4096, per=38.46%, avg=4096.00, stdev= 0.00, samples=2 00:13:41.615 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:41.615 lat (usec) : 250=2.59%, 500=46.80%, 750=24.56%, 1000=6.27% 00:13:41.615 lat (msec) : 2=19.65%, 4=0.06%, 10=0.06% 00:13:41.615 cpu : usr=1.90%, sys=4.10%, ctx=1548, majf=0, minf=1 00:13:41.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.615 job2: (groupid=0, jobs=1): err= 0: pid=3499682: Mon Jul 22 20:20:53 2024 00:13:41.615 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1025msec) 00:13:41.615 slat (nsec): min=13706, max=30622, avg=25686.39, stdev=3515.23 00:13:41.615 clat (usec): min=1024, max=42265, avg=39657.32, stdev=9644.97 00:13:41.615 lat (usec): min=1051, max=42290, avg=39683.01, stdev=9644.70 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41157], 20.00th=[41681], 00:13:41.615 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:41.615 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:41.615 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:13:41.615 | 99.99th=[42206] 00:13:41.615 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:13:41.615 slat (usec): min=10, max=23307, avg=74.90, stdev=1028.84 00:13:41.615 clat (usec): min=145, max=890, avg=525.14, stdev=129.82 00:13:41.615 lat (usec): min=157, max=23752, avg=600.05, stdev=1033.68 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 194], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 424], 00:13:41.615 | 30.00th=[ 453], 40.00th=[ 490], 50.00th=[ 529], 60.00th=[ 562], 00:13:41.615 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 734], 00:13:41.615 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 889], 00:13:41.615 | 99.99th=[ 889] 00:13:41.615 bw ( KiB/s): min= 4096, max= 4096, per=38.46%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.615 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:41.615 lat (usec) : 250=1.32%, 500=40.75%, 750=51.32%, 1000=3.21% 00:13:41.615 lat (msec) : 2=0.19%, 50=3.21% 00:13:41.615 cpu : usr=0.59%, sys=1.56%, ctx=532, majf=0, minf=1 00:13:41.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.615 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.615 job3: (groupid=0, jobs=1): err= 0: pid=3499688: Mon Jul 22 20:20:53 2024 00:13:41.615 read: IOPS=14, BW=59.6KiB/s (61.0kB/s)(60.0KiB/1007msec) 00:13:41.615 slat (nsec): min=25335, max=26423, avg=25582.73, stdev=281.20 00:13:41.615 clat (usec): min=1205, max=42171, avg=39261.86, stdev=10528.23 00:13:41.615 lat (usec): min=1231, max=42196, avg=39287.45, stdev=10528.13 00:13:41.615 clat percentiles (usec): 00:13:41.615 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681], 00:13:41.615 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:41.615 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:41.615 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:41.615 | 99.99th=[42206] 00:13:41.615 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:13:41.615 slat (usec): min=10, max=5439, avg=41.73, stdev=239.16 00:13:41.616 clat (usec): min=308, max=1574, avg=765.49, stdev=172.78 00:13:41.616 lat (usec): min=342, max=6163, avg=807.22, stdev=294.42 00:13:41.616 clat percentiles (usec): 00:13:41.616 | 1.00th=[ 441], 5.00th=[ 498], 10.00th=[ 570], 20.00th=[ 619], 00:13:41.616 | 30.00th=[ 668], 40.00th=[ 701], 50.00th=[ 742], 60.00th=[ 791], 00:13:41.616 | 70.00th=[ 840], 80.00th=[ 906], 90.00th=[ 996], 95.00th=[ 1074], 00:13:41.616 | 99.00th=[ 1221], 99.50th=[ 1287], 99.90th=[ 1582], 99.95th=[ 1582], 00:13:41.616 | 99.99th=[ 1582] 00:13:41.616 bw ( KiB/s): min= 4096, max= 4096, per=38.46%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.616 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:41.616 lat (usec) : 500=4.93%, 750=45.92%, 1000=36.62% 00:13:41.616 lat (msec) : 2=9.87%, 50=2.66% 00:13:41.616 cpu : usr=0.89%, sys=1.39%, ctx=529, majf=0, minf=1 00:13:41.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:41.616 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.616 00:13:41.616 Run status group 0 (all jobs): 00:13:41.616 READ: bw=4168KiB/s (4268kB/s), 59.6KiB/s-2088KiB/s (61.0kB/s-2138kB/s), io=4272KiB (4375kB), run=1001-1025msec 00:13:41.616 WRITE: bw=10.4MiB/s (10.9MB/s), 1998KiB/s-4088KiB/s (2046kB/s-4186kB/s), io=10.7MiB (11.2MB), run=1001-1025msec 00:13:41.616 00:13:41.616 Disk stats (read/write): 00:13:41.616 nvme0n1: ios=502/512, merge=0/0, ticks=504/253, in_queue=757, util=83.97% 00:13:41.616 nvme0n2: ios=565/628, merge=0/0, ticks=1080/320, in_queue=1400, util=88.47% 00:13:41.616 nvme0n3: ios=63/512, merge=0/0, ticks=734/245, in_queue=979, util=94.19% 00:13:41.616 nvme0n4: ios=60/512, merge=0/0, ticks=620/377, in_queue=997, util=96.15% 00:13:41.616 20:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:41.616 [global] 00:13:41.616 thread=1 00:13:41.616 invalidate=1 00:13:41.616 rw=randwrite 00:13:41.616 time_based=1 00:13:41.616 runtime=1 00:13:41.616 ioengine=libaio 00:13:41.616 direct=1 00:13:41.616 bs=4096 00:13:41.616 iodepth=1 00:13:41.616 norandommap=0 00:13:41.616 numjobs=1 00:13:41.616 00:13:41.616 verify_dump=1 00:13:41.616 verify_backlog=512 00:13:41.616 verify_state_save=0 00:13:41.616 do_verify=1 00:13:41.616 verify=crc32c-intel 00:13:41.616 [job0] 00:13:41.616 filename=/dev/nvme0n1 00:13:41.616 [job1] 00:13:41.616 filename=/dev/nvme0n2 00:13:41.616 [job2] 00:13:41.616 filename=/dev/nvme0n3 00:13:41.616 [job3] 00:13:41.616 filename=/dev/nvme0n4 00:13:41.616 Could not set queue depth (nvme0n1) 00:13:41.616 Could not set queue depth (nvme0n2) 00:13:41.616 Could not set queue depth (nvme0n3) 00:13:41.616 Could not set queue depth (nvme0n4) 00:13:41.876 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.876 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.876 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.876 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.876 fio-3.35 00:13:41.876 Starting 4 threads 00:13:43.291 00:13:43.291 job0: (groupid=0, jobs=1): err= 0: pid=3500193: Mon Jul 22 20:20:54 2024 00:13:43.291 read: IOPS=17, BW=70.8KiB/s (72.5kB/s)(72.0KiB/1017msec) 00:13:43.291 slat (nsec): min=25615, max=26207, avg=25885.89, stdev=168.07 00:13:43.291 clat (usec): min=40979, max=42023, avg=41782.95, stdev=360.09 00:13:43.291 lat (usec): min=41005, max=42049, avg=41808.83, stdev=360.05 00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:13:43.291 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:43.291 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.291 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.291 | 99.99th=[42206] 00:13:43.291 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:13:43.291 slat (nsec): min=9599, max=56873, avg=29910.77, stdev=8468.30 00:13:43.291 clat (usec): min=183, max=941, avg=477.86, stdev=94.27 00:13:43.291 lat (usec): min=194, max=981, avg=507.77, stdev=95.50 
00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[ 306], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 420], 00:13:43.291 | 30.00th=[ 441], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 478], 00:13:43.291 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 644], 00:13:43.291 | 99.00th=[ 775], 99.50th=[ 840], 99.90th=[ 938], 99.95th=[ 938], 00:13:43.291 | 99.99th=[ 938] 00:13:43.291 bw ( KiB/s): min= 4096, max= 4096, per=51.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.291 lat (usec) : 250=0.57%, 500=66.79%, 750=27.92%, 1000=1.32% 00:13:43.291 lat (msec) : 50=3.40% 00:13:43.291 cpu : usr=0.59%, sys=1.77%, ctx=531, majf=0, minf=1 00:13:43.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.291 job1: (groupid=0, jobs=1): err= 0: pid=3500207: Mon Jul 22 20:20:54 2024 00:13:43.291 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:13:43.291 slat (nsec): min=24205, max=43061, avg=25513.76, stdev=4530.39 00:13:43.291 clat (usec): min=1197, max=42003, avg=39478.89, stdev=9868.32 00:13:43.291 lat (usec): min=1222, max=42028, avg=39504.41, stdev=9868.60 00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41157], 20.00th=[41681], 00:13:43.291 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:43.291 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.291 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.291 | 99.99th=[42206] 00:13:43.291 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:43.291 slat (nsec): min=8926, max=63303, avg=27987.21, stdev=8011.02 00:13:43.291 clat (usec): min=319, max=957, avg=616.55, stdev=117.89 00:13:43.291 lat (usec): min=344, max=987, avg=644.54, stdev=119.85 00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[ 351], 5.00th=[ 416], 10.00th=[ 474], 20.00th=[ 515], 00:13:43.291 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:13:43.291 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:13:43.291 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:13:43.291 | 99.99th=[ 955] 00:13:43.291 bw ( KiB/s): min= 4096, max= 4096, per=51.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.291 lat (usec) : 500=16.45%, 750=68.05%, 1000=12.29% 00:13:43.291 lat (msec) : 2=0.19%, 50=3.02% 00:13:43.291 cpu : usr=0.30%, sys=1.89%, ctx=529, majf=0, minf=1 00:13:43.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.291 job2: (groupid=0, jobs=1): err= 0: pid=3500223: Mon Jul 22 20:20:54 2024 00:13:43.291 read: IOPS=14, BW=58.9KiB/s (60.4kB/s)(60.0KiB/1018msec) 00:13:43.291 slat (nsec): min=24171, max=24711, 
avg=24465.87, stdev=147.51 00:13:43.291 clat (usec): min=41922, max=42223, avg=41976.49, stdev=72.07 00:13:43.291 lat (usec): min=41946, max=42248, avg=42000.95, stdev=72.14 00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:13:43.291 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:43.291 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.291 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.291 | 99.99th=[42206] 00:13:43.291 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:13:43.291 slat (nsec): min=9181, max=49484, avg=27605.37, stdev=8070.37 00:13:43.291 clat (usec): min=408, max=919, avg=721.97, stdev=95.86 00:13:43.291 lat (usec): min=417, max=964, avg=749.57, stdev=99.47 00:13:43.291 clat percentiles (usec): 00:13:43.291 | 1.00th=[ 449], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 652], 00:13:43.291 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 734], 60.00th=[ 758], 00:13:43.291 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:13:43.291 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 922], 99.95th=[ 922], 00:13:43.291 | 99.99th=[ 922] 00:13:43.291 bw ( KiB/s): min= 4096, max= 4096, per=51.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.291 lat (usec) : 500=2.66%, 750=53.13%, 1000=41.37% 00:13:43.291 lat (msec) : 50=2.85% 00:13:43.291 cpu : usr=0.59%, sys=1.47%, ctx=527, majf=0, minf=1 00:13:43.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.291 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.291 job3: (groupid=0, jobs=1): err= 0: pid=3500234: Mon Jul 22 20:20:54 2024 00:13:43.291 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:13:43.291 slat (nsec): min=25671, max=26181, avg=25894.53, stdev=122.58 00:13:43.292 clat (usec): min=40893, max=41957, avg=41041.39, stdev=248.87 00:13:43.292 lat (usec): min=40919, max=41983, avg=41067.29, stdev=248.90 00:13:43.292 clat percentiles (usec): 00:13:43.292 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:43.292 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:43.292 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:43.292 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.292 | 99.99th=[42206] 00:13:43.292 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:13:43.292 slat (nsec): min=8801, max=51319, avg=29555.24, stdev=8562.34 00:13:43.292 clat (usec): min=261, max=927, avg=594.08, stdev=127.68 00:13:43.292 lat (usec): min=271, max=959, avg=623.64, stdev=131.36 00:13:43.292 clat percentiles (usec): 00:13:43.292 | 1.00th=[ 277], 5.00th=[ 375], 10.00th=[ 424], 20.00th=[ 490], 00:13:43.292 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:13:43.292 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 816], 00:13:43.292 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930], 00:13:43.292 | 99.99th=[ 930] 00:13:43.292 bw ( KiB/s): min= 4096, max= 4096, per=51.10%, avg=4096.00, stdev= 0.00, samples=1 
00:13:43.292 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.292 lat (usec) : 500=20.60%, 750=67.30%, 1000=8.88% 00:13:43.292 lat (msec) : 50=3.21% 00:13:43.292 cpu : usr=1.08%, sys=1.86%, ctx=529, majf=0, minf=1 00:13:43.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.292 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.292 00:13:43.292 Run status group 0 (all jobs): 00:13:43.292 READ: bw=262KiB/s (269kB/s), 58.9KiB/s-70.8KiB/s (60.4kB/s-72.5kB/s), io=268KiB (274kB), run=1006-1022msec 00:13:43.292 WRITE: bw=8016KiB/s (8208kB/s), 2004KiB/s-2036KiB/s (2052kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1022msec 00:13:43.292 00:13:43.292 Disk stats (read/write): 00:13:43.292 nvme0n1: ios=38/512, merge=0/0, ticks=1488/247, in_queue=1735, util=96.39% 00:13:43.292 nvme0n2: ios=37/512, merge=0/0, ticks=466/311, in_queue=777, util=83.64% 00:13:43.292 nvme0n3: ios=26/512, merge=0/0, ticks=423/360, in_queue=783, util=87.60% 00:13:43.292 nvme0n4: ios=11/512, merge=0/0, ticks=452/218, in_queue=670, util=89.00% 00:13:43.292 20:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:43.292 [global] 00:13:43.292 thread=1 00:13:43.292 invalidate=1 00:13:43.292 rw=write 00:13:43.292 time_based=1 00:13:43.292 runtime=1 00:13:43.292 ioengine=libaio 00:13:43.292 direct=1 00:13:43.292 bs=4096 00:13:43.292 iodepth=128 00:13:43.292 norandommap=0 00:13:43.292 numjobs=1 00:13:43.292 00:13:43.292 verify_dump=1 00:13:43.292 verify_backlog=512 00:13:43.292 verify_state_save=0 00:13:43.292 do_verify=1 00:13:43.292 verify=crc32c-intel 00:13:43.292 [job0] 00:13:43.292 filename=/dev/nvme0n1 00:13:43.292 [job1] 00:13:43.292 filename=/dev/nvme0n2 00:13:43.292 [job2] 00:13:43.292 filename=/dev/nvme0n3 00:13:43.292 [job3] 00:13:43.292 filename=/dev/nvme0n4 00:13:43.292 Could not set queue depth (nvme0n1) 00:13:43.292 Could not set queue depth (nvme0n2) 00:13:43.292 Could not set queue depth (nvme0n3) 00:13:43.292 Could not set queue depth (nvme0n4) 00:13:43.594 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.594 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.594 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.594 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.594 fio-3.35 00:13:43.594 Starting 4 threads 00:13:44.989 00:13:44.989 job0: (groupid=0, jobs=1): err= 0: pid=3500740: Mon Jul 22 20:20:56 2024 00:13:44.989 read: IOPS=7323, BW=28.6MiB/s (30.0MB/s)(28.9MiB/1009msec) 00:13:44.989 slat (nsec): min=919, max=6982.7k, avg=64872.78, stdev=434106.38 00:13:44.989 clat (usec): min=2367, max=19687, avg=8832.37, stdev=2485.70 00:13:44.989 lat (usec): min=2391, max=19725, avg=8897.24, stdev=2503.94 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6783], 00:13:44.989 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8979], 00:13:44.989 | 
70.00th=[ 9896], 80.00th=[10945], 90.00th=[12125], 95.00th=[13304], 00:13:44.989 | 99.00th=[16450], 99.50th=[16581], 99.90th=[17957], 99.95th=[18220], 00:13:44.989 | 99.99th=[19792] 00:13:44.989 write: IOPS=7611, BW=29.7MiB/s (31.2MB/s)(30.0MiB/1009msec); 0 zone resets 00:13:44.989 slat (nsec): min=1613, max=11686k, avg=61568.90, stdev=427660.34 00:13:44.989 clat (usec): min=1917, max=31686, avg=8088.50, stdev=4012.25 00:13:44.989 lat (usec): min=1921, max=31721, avg=8150.07, stdev=4040.47 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 2966], 5.00th=[ 4047], 10.00th=[ 4490], 20.00th=[ 5211], 00:13:44.989 | 30.00th=[ 6063], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7701], 00:13:44.989 | 70.00th=[ 8225], 80.00th=[10159], 90.00th=[12387], 95.00th=[15926], 00:13:44.989 | 99.00th=[24773], 99.50th=[27657], 99.90th=[29492], 99.95th=[29492], 00:13:44.989 | 99.99th=[31589] 00:13:44.989 bw ( KiB/s): min=25184, max=36256, per=31.30%, avg=30720.00, stdev=7829.09, samples=2 00:13:44.989 iops : min= 6296, max= 9064, avg=7680.00, stdev=1957.27, samples=2 00:13:44.989 lat (msec) : 2=0.04%, 4=2.31%, 10=73.76%, 20=22.54%, 50=1.35% 00:13:44.989 cpu : usr=4.96%, sys=6.85%, ctx=633, majf=0, minf=1 00:13:44.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:44.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.989 issued rwts: total=7389,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.989 job1: (groupid=0, jobs=1): err= 0: pid=3500746: Mon Jul 22 20:20:56 2024 00:13:44.989 read: IOPS=6685, BW=26.1MiB/s (27.4MB/s)(26.4MiB/1010msec) 00:13:44.989 slat (nsec): min=873, max=22036k, avg=75254.62, stdev=590321.36 00:13:44.989 clat (usec): min=2202, max=33226, avg=10599.14, stdev=3680.35 00:13:44.989 lat (usec): min=2209, max=33232, avg=10674.39, stdev=3712.55 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 4293], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7832], 00:13:44.989 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11076], 00:13:44.989 | 70.00th=[11600], 80.00th=[12780], 90.00th=[14746], 95.00th=[15139], 00:13:44.989 | 99.00th=[26084], 99.50th=[27657], 99.90th=[33162], 99.95th=[33162], 00:13:44.989 | 99.99th=[33162] 00:13:44.989 write: IOPS=7097, BW=27.7MiB/s (29.1MB/s)(28.0MiB/1010msec); 0 zone resets 00:13:44.989 slat (nsec): min=1543, max=6089.0k, avg=52802.28, stdev=378342.63 00:13:44.989 clat (usec): min=861, max=29836, avg=7899.19, stdev=2791.49 00:13:44.989 lat (usec): min=983, max=29838, avg=7951.99, stdev=2811.79 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 2376], 5.00th=[ 4178], 10.00th=[ 4752], 20.00th=[ 5604], 00:13:44.989 | 30.00th=[ 6194], 40.00th=[ 6849], 50.00th=[ 7701], 60.00th=[ 8455], 00:13:44.989 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[11469], 95.00th=[12125], 00:13:44.989 | 99.00th=[16188], 99.50th=[16712], 99.90th=[25297], 99.95th=[25297], 00:13:44.989 | 99.99th=[29754] 00:13:44.989 bw ( KiB/s): min=28424, max=28672, per=29.09%, avg=28548.00, stdev=175.36, samples=2 00:13:44.989 iops : min= 7106, max= 7168, avg=7137.00, stdev=43.84, samples=2 00:13:44.989 lat (usec) : 1000=0.01% 00:13:44.989 lat (msec) : 2=0.23%, 4=2.15%, 10=60.77%, 20=35.73%, 50=1.11% 00:13:44.989 cpu : usr=4.26%, sys=7.83%, ctx=467, majf=0, minf=1 00:13:44.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 
00:13:44.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.989 issued rwts: total=6752,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.989 job2: (groupid=0, jobs=1): err= 0: pid=3500768: Mon Jul 22 20:20:56 2024 00:13:44.989 read: IOPS=4966, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:13:44.989 slat (nsec): min=919, max=11095k, avg=104511.29, stdev=749876.66 00:13:44.989 clat (usec): min=2967, max=23389, avg=12848.37, stdev=3243.34 00:13:44.989 lat (usec): min=4535, max=23420, avg=12952.88, stdev=3282.29 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:13:44.989 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:13:44.989 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17957], 95.00th=[19530], 00:13:44.989 | 99.00th=[21890], 99.50th=[21890], 99.90th=[22676], 99.95th=[22676], 00:13:44.989 | 99.99th=[23462] 00:13:44.989 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:13:44.989 slat (nsec): min=1663, max=13751k, avg=88363.80, stdev=456545.40 00:13:44.989 clat (usec): min=2732, max=29702, avg=12258.49, stdev=4295.29 00:13:44.989 lat (usec): min=2740, max=32977, avg=12346.86, stdev=4319.44 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 3982], 5.00th=[ 6259], 10.00th=[ 7570], 20.00th=[ 9241], 00:13:44.989 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:13:44.989 | 70.00th=[12518], 80.00th=[13042], 90.00th=[19006], 95.00th=[21103], 00:13:44.989 | 99.00th=[26608], 99.50th=[26870], 99.90th=[29754], 99.95th=[29754], 00:13:44.989 | 99.99th=[29754] 00:13:44.989 bw ( KiB/s): min=20016, max=20944, per=20.87%, avg=20480.00, stdev=656.20, samples=2 00:13:44.989 iops : min= 5004, max= 5236, avg=5120.00, stdev=164.05, samples=2 00:13:44.989 lat (msec) : 4=0.58%, 10=15.85%, 20=77.47%, 50=6.10% 00:13:44.989 cpu : usr=3.39%, sys=5.28%, ctx=596, majf=0, minf=1 00:13:44.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:44.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.989 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.989 job3: (groupid=0, jobs=1): err= 0: pid=3500774: Mon Jul 22 20:20:56 2024 00:13:44.989 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:13:44.989 slat (nsec): min=910, max=8516.5k, avg=113037.18, stdev=661232.14 00:13:44.989 clat (usec): min=2856, max=32466, avg=14531.56, stdev=5703.87 00:13:44.989 lat (usec): min=2862, max=32473, avg=14644.60, stdev=5719.74 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 4228], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9634], 00:13:44.989 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12780], 60.00th=[14615], 00:13:44.989 | 70.00th=[17433], 80.00th=[20055], 90.00th=[22938], 95.00th=[23987], 00:13:44.989 | 99.00th=[30802], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:13:44.989 | 99.99th=[32375] 00:13:44.989 write: IOPS=4805, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec); 0 zone resets 00:13:44.989 slat (nsec): min=1538, max=8909.9k, avg=90921.59, stdev=543609.54 00:13:44.989 clat (usec): min=1323, max=24922, avg=12494.75, 
stdev=5312.46 00:13:44.989 lat (usec): min=1334, max=24932, avg=12585.67, stdev=5327.78 00:13:44.989 clat percentiles (usec): 00:13:44.989 | 1.00th=[ 3097], 5.00th=[ 5342], 10.00th=[ 6587], 20.00th=[ 7701], 00:13:44.989 | 30.00th=[ 9241], 40.00th=[10814], 50.00th=[11076], 60.00th=[12911], 00:13:44.989 | 70.00th=[14877], 80.00th=[18220], 90.00th=[21365], 95.00th=[22152], 00:13:44.989 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[25035], 00:13:44.989 | 99.99th=[25035] 00:13:44.989 bw ( KiB/s): min=13808, max=23696, per=19.11%, avg=18752.00, stdev=6991.87, samples=2 00:13:44.989 iops : min= 3452, max= 5924, avg=4688.00, stdev=1747.97, samples=2 00:13:44.989 lat (msec) : 2=0.42%, 4=1.11%, 10=27.55%, 20=55.25%, 50=15.66% 00:13:44.989 cpu : usr=3.30%, sys=4.20%, ctx=440, majf=0, minf=1 00:13:44.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:44.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.989 issued rwts: total=4608,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.989 00:13:44.989 Run status group 0 (all jobs): 00:13:44.989 READ: bw=91.8MiB/s (96.3MB/s), 18.0MiB/s-28.6MiB/s (18.8MB/s-30.0MB/s), io=92.7MiB (97.2MB), run=1002-1010msec 00:13:44.989 WRITE: bw=95.8MiB/s (101MB/s), 18.8MiB/s-29.7MiB/s (19.7MB/s-31.2MB/s), io=96.8MiB (102MB), run=1002-1010msec 00:13:44.989 00:13:44.989 Disk stats (read/write): 00:13:44.989 nvme0n1: ios=5949/6144, merge=0/0, ticks=41985/36207, in_queue=78192, util=96.69% 00:13:44.989 nvme0n2: ios=5848/6144, merge=0/0, ticks=42349/36972, in_queue=79321, util=93.17% 00:13:44.990 nvme0n3: ios=4121/4119, merge=0/0, ticks=47437/43240, in_queue=90677, util=97.26% 00:13:44.990 nvme0n4: ios=4123/4320, merge=0/0, ticks=23232/19543, in_queue=42775, util=92.74% 00:13:44.990 20:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:44.990 [global] 00:13:44.990 thread=1 00:13:44.990 invalidate=1 00:13:44.990 rw=randwrite 00:13:44.990 time_based=1 00:13:44.990 runtime=1 00:13:44.990 ioengine=libaio 00:13:44.990 direct=1 00:13:44.990 bs=4096 00:13:44.990 iodepth=128 00:13:44.990 norandommap=0 00:13:44.990 numjobs=1 00:13:44.990 00:13:44.990 verify_dump=1 00:13:44.990 verify_backlog=512 00:13:44.990 verify_state_save=0 00:13:44.990 do_verify=1 00:13:44.990 verify=crc32c-intel 00:13:44.990 [job0] 00:13:44.990 filename=/dev/nvme0n1 00:13:44.990 [job1] 00:13:44.990 filename=/dev/nvme0n2 00:13:44.990 [job2] 00:13:44.990 filename=/dev/nvme0n3 00:13:44.990 [job3] 00:13:44.990 filename=/dev/nvme0n4 00:13:44.990 Could not set queue depth (nvme0n1) 00:13:44.990 Could not set queue depth (nvme0n2) 00:13:44.990 Could not set queue depth (nvme0n3) 00:13:44.990 Could not set queue depth (nvme0n4) 00:13:45.249 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.249 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.249 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.249 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.249 fio-3.35 00:13:45.249 
Starting 4 threads 00:13:46.655 00:13:46.655 job0: (groupid=0, jobs=1): err= 0: pid=3501263: Mon Jul 22 20:20:58 2024 00:13:46.655 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:13:46.655 slat (nsec): min=884, max=13881k, avg=81003.53, stdev=593377.31 00:13:46.655 clat (usec): min=2255, max=33432, avg=10573.09, stdev=4968.82 00:13:46.655 lat (usec): min=2264, max=35787, avg=10654.10, stdev=5015.13 00:13:46.655 clat percentiles (usec): 00:13:46.655 | 1.00th=[ 3425], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 7373], 00:13:46.655 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9765], 00:13:46.655 | 70.00th=[10814], 80.00th=[13960], 90.00th=[16909], 95.00th=[20841], 00:13:46.655 | 99.00th=[28443], 99.50th=[31589], 99.90th=[32375], 99.95th=[32637], 00:13:46.655 | 99.99th=[33424] 00:13:46.655 write: IOPS=6201, BW=24.2MiB/s (25.4MB/s)(24.4MiB/1006msec); 0 zone resets 00:13:46.655 slat (nsec): min=1548, max=16030k, avg=74352.64, stdev=577290.95 00:13:46.655 clat (usec): min=1631, max=43572, avg=9884.72, stdev=5745.46 00:13:46.655 lat (usec): min=1642, max=43595, avg=9959.07, stdev=5787.00 00:13:46.655 clat percentiles (usec): 00:13:46.655 | 1.00th=[ 2573], 5.00th=[ 4555], 10.00th=[ 5407], 20.00th=[ 6456], 00:13:46.655 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 9372], 00:13:46.655 | 70.00th=[10421], 80.00th=[11863], 90.00th=[15270], 95.00th=[22414], 00:13:46.655 | 99.00th=[31065], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:46.655 | 99.99th=[43779] 00:13:46.655 bw ( KiB/s): min=23952, max=25200, per=27.73%, avg=24576.00, stdev=882.47, samples=2 00:13:46.655 iops : min= 5988, max= 6300, avg=6144.00, stdev=220.62, samples=2 00:13:46.655 lat (msec) : 2=0.06%, 4=2.30%, 10=62.46%, 20=28.85%, 50=6.32% 00:13:46.655 cpu : usr=3.98%, sys=5.27%, ctx=506, majf=0, minf=1 00:13:46.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:46.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:46.655 issued rwts: total=6144,6239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:46.655 job1: (groupid=0, jobs=1): err= 0: pid=3501270: Mon Jul 22 20:20:58 2024 00:13:46.655 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:13:46.655 slat (nsec): min=844, max=11144k, avg=69106.12, stdev=503355.87 00:13:46.655 clat (usec): min=2109, max=33666, avg=9656.94, stdev=4090.91 00:13:46.655 lat (usec): min=2114, max=39635, avg=9726.04, stdev=4127.44 00:13:46.655 clat percentiles (usec): 00:13:46.655 | 1.00th=[ 3916], 5.00th=[ 5145], 10.00th=[ 5997], 20.00th=[ 6849], 00:13:46.655 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8848], 00:13:46.655 | 70.00th=[10290], 80.00th=[12649], 90.00th=[16057], 95.00th=[18482], 00:13:46.655 | 99.00th=[21103], 99.50th=[22938], 99.90th=[33817], 99.95th=[33817], 00:13:46.655 | 99.99th=[33817] 00:13:46.655 write: IOPS=6810, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1003msec); 0 zone resets 00:13:46.655 slat (nsec): min=1446, max=17601k, avg=67195.84, stdev=495475.34 00:13:46.655 clat (usec): min=718, max=31782, avg=9222.64, stdev=5495.19 00:13:46.655 lat (usec): min=777, max=31789, avg=9289.84, stdev=5518.58 00:13:46.655 clat percentiles (usec): 00:13:46.655 | 1.00th=[ 2114], 5.00th=[ 3130], 10.00th=[ 4293], 20.00th=[ 5735], 00:13:46.655 | 30.00th=[ 6521], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 8029], 
00:13:46.655 | 70.00th=[ 9503], 80.00th=[11207], 90.00th=[17171], 95.00th=[21103], 00:13:46.655 | 99.00th=[30540], 99.50th=[30540], 99.90th=[31065], 99.95th=[31851], 00:13:46.655 | 99.99th=[31851] 00:13:46.655 bw ( KiB/s): min=25400, max=28232, per=30.26%, avg=26816.00, stdev=2002.53, samples=2 00:13:46.655 iops : min= 6350, max= 7058, avg=6704.00, stdev=500.63, samples=2 00:13:46.655 lat (usec) : 750=0.01% 00:13:46.655 lat (msec) : 2=0.27%, 4=4.42%, 10=63.89%, 20=27.49%, 50=3.93% 00:13:46.656 cpu : usr=4.09%, sys=5.69%, ctx=563, majf=0, minf=1 00:13:46.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:46.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:46.656 issued rwts: total=6656,6831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:46.656 job2: (groupid=0, jobs=1): err= 0: pid=3501281: Mon Jul 22 20:20:58 2024 00:13:46.656 read: IOPS=4134, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1006msec) 00:13:46.656 slat (nsec): min=888, max=29260k, avg=110666.33, stdev=874556.07 00:13:46.656 clat (usec): min=2216, max=78432, avg=14300.96, stdev=11478.60 00:13:46.656 lat (usec): min=4029, max=78441, avg=14411.62, stdev=11550.71 00:13:46.656 clat percentiles (usec): 00:13:46.656 | 1.00th=[ 5276], 5.00th=[ 7242], 10.00th=[ 8291], 20.00th=[ 8848], 00:13:46.656 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11076], 00:13:46.656 | 70.00th=[12518], 80.00th=[15270], 90.00th=[25822], 95.00th=[34866], 00:13:46.656 | 99.00th=[67634], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:13:46.656 | 99.99th=[78119] 00:13:46.656 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:13:46.656 slat (nsec): min=1485, max=21287k, avg=114004.26, stdev=726928.33 00:13:46.656 clat (usec): min=4198, max=90363, avg=14743.95, stdev=13854.57 00:13:46.656 lat (usec): min=4200, max=90373, avg=14857.95, stdev=13944.20 00:13:46.656 clat percentiles (usec): 00:13:46.656 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 7504], 20.00th=[ 7832], 00:13:46.656 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[11338], 00:13:46.656 | 70.00th=[14222], 80.00th=[19006], 90.00th=[22414], 95.00th=[39584], 00:13:46.656 | 99.00th=[84411], 99.50th=[86508], 99.90th=[90702], 99.95th=[90702], 00:13:46.656 | 99.99th=[90702] 00:13:46.656 bw ( KiB/s): min=15864, max=20480, per=20.51%, avg=18172.00, stdev=3264.00, samples=2 00:13:46.656 iops : min= 3966, max= 5120, avg=4543.00, stdev=816.00, samples=2 00:13:46.656 lat (msec) : 4=0.01%, 10=49.26%, 20=34.26%, 50=13.00%, 100=3.46% 00:13:46.656 cpu : usr=2.29%, sys=3.38%, ctx=451, majf=0, minf=1 00:13:46.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:46.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:46.656 issued rwts: total=4159,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:46.656 job3: (groupid=0, jobs=1): err= 0: pid=3501288: Mon Jul 22 20:20:58 2024 00:13:46.656 read: IOPS=4360, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:13:46.656 slat (nsec): min=900, max=11557k, avg=107956.86, stdev=776270.92 00:13:46.656 clat (usec): min=1401, max=58820, avg=13485.40, stdev=4494.11 00:13:46.656 lat (usec): min=4152, max=58826, 
avg=13593.35, stdev=4549.81 00:13:46.656 clat percentiles (usec): 00:13:46.656 | 1.00th=[ 4686], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[10421], 00:13:46.656 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12256], 60.00th=[13042], 00:13:46.656 | 70.00th=[14353], 80.00th=[16909], 90.00th=[20055], 95.00th=[21890], 00:13:46.656 | 99.00th=[25297], 99.50th=[29230], 99.90th=[31851], 99.95th=[32113], 00:13:46.656 | 99.99th=[58983] 00:13:46.656 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:13:46.656 slat (nsec): min=1512, max=15020k, avg=107362.94, stdev=671040.47 00:13:46.656 clat (usec): min=1039, max=44799, avg=14824.32, stdev=8781.89 00:13:46.656 lat (usec): min=1056, max=44807, avg=14931.68, stdev=8835.89 00:13:46.656 clat percentiles (usec): 00:13:46.656 | 1.00th=[ 2147], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 8356], 00:13:46.656 | 30.00th=[ 9634], 40.00th=[11338], 50.00th=[12387], 60.00th=[13829], 00:13:46.656 | 70.00th=[16581], 80.00th=[20841], 90.00th=[25560], 95.00th=[36439], 00:13:46.656 | 99.00th=[41681], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:13:46.656 | 99.99th=[44827] 00:13:46.656 bw ( KiB/s): min=16624, max=20240, per=20.80%, avg=18432.00, stdev=2556.90, samples=2 00:13:46.656 iops : min= 4156, max= 5060, avg=4608.00, stdev=639.22, samples=2 00:13:46.656 lat (msec) : 2=0.49%, 4=0.87%, 10=23.17%, 20=59.73%, 50=15.74% 00:13:46.656 lat (msec) : 100=0.01% 00:13:46.656 cpu : usr=2.69%, sys=4.59%, ctx=377, majf=0, minf=1 00:13:46.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:46.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:46.656 issued rwts: total=4378,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:46.656 00:13:46.656 Run status group 0 (all jobs): 00:13:46.656 READ: bw=82.8MiB/s (86.9MB/s), 16.1MiB/s-25.9MiB/s (16.9MB/s-27.2MB/s), io=83.3MiB (87.4MB), run=1003-1006msec 00:13:46.656 WRITE: bw=86.5MiB/s (90.7MB/s), 17.9MiB/s-26.6MiB/s (18.8MB/s-27.9MB/s), io=87.1MiB (91.3MB), run=1003-1006msec 00:13:46.656 00:13:46.656 Disk stats (read/write): 00:13:46.656 nvme0n1: ios=5274/5632, merge=0/0, ticks=34276/33710, in_queue=67986, util=99.10% 00:13:46.656 nvme0n2: ios=5656/6020, merge=0/0, ticks=34140/35845, in_queue=69985, util=90.19% 00:13:46.656 nvme0n3: ios=3476/3584, merge=0/0, ticks=17981/17150, in_queue=35131, util=94.29% 00:13:46.656 nvme0n4: ios=3611/3641, merge=0/0, ticks=42536/53503, in_queue=96039, util=92.09% 00:13:46.656 20:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:46.656 20:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3501580 00:13:46.656 20:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:46.656 20:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:46.656 [global] 00:13:46.656 thread=1 00:13:46.656 invalidate=1 00:13:46.656 rw=read 00:13:46.656 time_based=1 00:13:46.656 runtime=10 00:13:46.656 ioengine=libaio 00:13:46.656 direct=1 00:13:46.656 bs=4096 00:13:46.656 iodepth=1 00:13:46.656 norandommap=1 00:13:46.656 numjobs=1 00:13:46.656 00:13:46.656 [job0] 00:13:46.656 filename=/dev/nvme0n1 00:13:46.656 [job1] 00:13:46.656 filename=/dev/nvme0n2 00:13:46.656 
[job2] 00:13:46.656 filename=/dev/nvme0n3 00:13:46.656 [job3] 00:13:46.656 filename=/dev/nvme0n4 00:13:46.656 Could not set queue depth (nvme0n1) 00:13:46.656 Could not set queue depth (nvme0n2) 00:13:46.656 Could not set queue depth (nvme0n3) 00:13:46.656 Could not set queue depth (nvme0n4) 00:13:46.916 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.916 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.916 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.916 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.916 fio-3.35 00:13:46.916 Starting 4 threads 00:13:49.464 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:49.464 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3096576, buflen=4096 00:13:49.464 fio: pid=3501814, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:49.464 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:49.725 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3043328, buflen=4096 00:13:49.725 fio: pid=3501807, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:49.725 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:49.725 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:49.986 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9592832, buflen=4096 00:13:49.986 fio: pid=3501791, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:49.986 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:49.986 20:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:50.247 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=3678208, buflen=4096 00:13:50.247 fio: pid=3501798, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:50.247 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.247 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:50.247 00:13:50.247 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3501791: Mon Jul 22 20:21:02 2024 00:13:50.247 read: IOPS=798, BW=3193KiB/s (3270kB/s)(9368KiB/2934msec) 00:13:50.247 slat (usec): min=7, max=13774, avg=34.63, stdev=303.24 00:13:50.247 clat (usec): min=841, max=3376, avg=1202.32, stdev=85.44 00:13:50.247 lat (usec): min=853, max=15025, avg=1236.96, stdev=316.20 00:13:50.247 clat percentiles (usec): 00:13:50.247 | 1.00th=[ 1029], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1156], 
00:13:50.247 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:13:50.247 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1270], 95.00th=[ 1303], 00:13:50.247 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1860], 99.95th=[ 2073], 00:13:50.247 | 99.99th=[ 3392] 00:13:50.247 bw ( KiB/s): min= 3208, max= 3312, per=54.35%, avg=3248.00, stdev=41.18, samples=5 00:13:50.247 iops : min= 802, max= 828, avg=812.00, stdev=10.30, samples=5 00:13:50.247 lat (usec) : 1000=0.43% 00:13:50.247 lat (msec) : 2=99.45%, 4=0.09% 00:13:50.247 cpu : usr=1.36%, sys=3.27%, ctx=2345, majf=0, minf=1 00:13:50.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.247 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3501798: Mon Jul 22 20:21:02 2024 00:13:50.247 read: IOPS=283, BW=1132KiB/s (1160kB/s)(3592KiB/3172msec) 00:13:50.247 slat (usec): min=7, max=13882, avg=71.22, stdev=714.97 00:13:50.247 clat (usec): min=736, max=57060, avg=3453.61, stdev=9666.51 00:13:50.247 lat (usec): min=760, max=57084, avg=3517.17, stdev=9680.48 00:13:50.247 clat percentiles (usec): 00:13:50.247 | 1.00th=[ 799], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 988], 00:13:50.247 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1045], 60.00th=[ 1074], 00:13:50.247 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[41157], 00:13:50.247 | 99.00th=[41681], 99.50th=[42206], 99.90th=[56886], 99.95th=[56886], 00:13:50.247 | 99.99th=[56886] 00:13:50.247 bw ( KiB/s): min= 424, max= 2776, per=19.28%, avg=1152.50, stdev=1025.97, samples=6 00:13:50.247 iops : min= 106, max= 694, avg=288.00, stdev=256.58, samples=6 00:13:50.247 lat (usec) : 750=0.33%, 1000=25.25% 00:13:50.247 lat (msec) : 2=68.41%, 50=5.78%, 100=0.11% 00:13:50.247 cpu : usr=0.38%, sys=0.95%, ctx=903, majf=0, minf=1 00:13:50.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 issued rwts: total=899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.247 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3501807: Mon Jul 22 20:21:02 2024 00:13:50.247 read: IOPS=269, BW=1076KiB/s (1101kB/s)(2972KiB/2763msec) 00:13:50.247 slat (usec): min=6, max=134, avg=23.03, stdev= 8.11 00:13:50.247 clat (usec): min=342, max=45895, avg=3661.98, stdev=10366.40 00:13:50.247 lat (usec): min=349, max=45926, avg=3685.01, stdev=10367.14 00:13:50.247 clat percentiles (usec): 00:13:50.247 | 1.00th=[ 603], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 783], 00:13:50.247 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 873], 60.00th=[ 889], 00:13:50.247 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 996], 95.00th=[41681], 00:13:50.247 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:13:50.247 | 99.99th=[45876] 00:13:50.247 bw ( KiB/s): min= 96, max= 3176, per=12.65%, avg=756.80, stdev=1353.96, samples=5 00:13:50.247 iops : min= 24, max= 794, avg=189.20, stdev=338.49, 
samples=5 00:13:50.247 lat (usec) : 500=0.27%, 750=11.96%, 1000=77.69% 00:13:50.247 lat (msec) : 2=3.09%, 50=6.85% 00:13:50.247 cpu : usr=0.40%, sys=0.62%, ctx=745, majf=0, minf=1 00:13:50.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.247 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3501814: Mon Jul 22 20:21:02 2024 00:13:50.247 read: IOPS=292, BW=1168KiB/s (1196kB/s)(3024KiB/2590msec) 00:13:50.247 slat (nsec): min=6560, max=44354, avg=24630.73, stdev=3849.05 00:13:50.247 clat (usec): min=618, max=42033, avg=3367.06, stdev=9429.65 00:13:50.247 lat (usec): min=644, max=42058, avg=3391.69, stdev=9430.00 00:13:50.247 clat percentiles (usec): 00:13:50.247 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:13:50.247 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:13:50.247 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[41157], 00:13:50.247 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:50.247 | 99.99th=[42206] 00:13:50.247 bw ( KiB/s): min= 96, max= 3824, per=18.54%, avg=1108.80, stdev=1581.23, samples=5 00:13:50.247 iops : min= 24, max= 956, avg=277.20, stdev=395.31, samples=5 00:13:50.247 lat (usec) : 750=1.98%, 1000=35.40% 00:13:50.247 lat (msec) : 2=56.41%, 4=0.13%, 10=0.13%, 50=5.81% 00:13:50.247 cpu : usr=0.19%, sys=0.97%, ctx=757, majf=0, minf=2 00:13:50.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.247 issued rwts: total=757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.247 00:13:50.247 Run status group 0 (all jobs): 00:13:50.247 READ: bw=5976KiB/s (6119kB/s), 1076KiB/s-3193KiB/s (1101kB/s-3270kB/s), io=18.5MiB (19.4MB), run=2590-3172msec 00:13:50.247 00:13:50.247 Disk stats (read/write): 00:13:50.247 nvme0n1: ios=2281/0, merge=0/0, ticks=2477/0, in_queue=2477, util=94.19% 00:13:50.247 nvme0n2: ios=876/0, merge=0/0, ticks=2957/0, in_queue=2957, util=94.70% 00:13:50.247 nvme0n3: ios=570/0, merge=0/0, ticks=2562/0, in_queue=2562, util=95.99% 00:13:50.247 nvme0n4: ios=757/0, merge=0/0, ticks=2535/0, in_queue=2535, util=96.09% 00:13:50.247 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.247 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:50.508 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.508 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:50.770 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.770 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:51.030 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:51.030 20:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:51.291 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:51.291 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3501580 00:13:51.291 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:51.291 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:51.862 nvmf hotplug test: fio failed as expected 00:13:51.862 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:13:52.123 20:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.123 rmmod nvme_tcp 00:13:52.123 rmmod nvme_fabrics 00:13:52.123 rmmod nvme_keyring 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3498042 ']' 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3498042 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3498042 ']' 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3498042 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3498042 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3498042' 00:13:52.123 killing process with pid 3498042 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3498042 00:13:52.123 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3498042 00:13:53.065 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.066 20:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.609 00:13:55.609 real 0m30.327s 00:13:55.609 user 2m37.656s 00:13:55.609 sys 0m9.292s 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.609 ************************************ 00:13:55.609 END TEST nvmf_fio_target 00:13:55.609 
************************************ 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.609 20:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:55.609 ************************************ 00:13:55.609 START TEST nvmf_bdevio 00:13:55.609 ************************************ 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:55.610 * Looking for test storage... 00:13:55.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.610 20:21:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.610 20:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.199 20:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@298 -- # mlx=() 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.199 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.199 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.200 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.200 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.200 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.200 
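The trace above shows nvmf/common.sh building its tables of supported NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX family) and then resolving each matching PCI function to a kernel net device by globbing /sys/bus/pci/devices/$pci/net/, which is how 0000:4b:00.0 and 0000:4b:00.1 end up mapped to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup for the E810 ports found here (the loop is illustrative, not the framework's own code):

    #!/usr/bin/env bash
    # Map Intel E810 functions (8086:159b, bound to the ice driver) to their net interfaces via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for netdir in "$pci"/net/*; do
            [[ -e $netdir ]] || continue               # this function has no bound netdev
            echo "Found ${pci##*/} -> ${netdir##*/}"   # e.g. 0000:4b:00.0 -> cvl_0_0
        done
    done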
20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.200 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:14:02.462 00:14:02.462 --- 10.0.0.2 ping statistics --- 00:14:02.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.462 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:14:02.462 00:14:02.462 --- 10.0.0.1 ping statistics --- 00:14:02.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.462 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3507700 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3507700 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3507700 ']' 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.462 20:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.462 [2024-07-22 20:21:14.441027] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
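Before the target application is started, nvmf_tcp_init has split the two ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched with ip netns exec inside that namespace. Condensed into the underlying commands, all of which appear verbatim in the trace above:

    # Target-side interface lives in its own namespace; the initiator side stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns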
00:14:02.462 [2024-07-22 20:21:14.441152] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.724 [2024-07-22 20:21:14.596868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.985 [2024-07-22 20:21:14.832934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.985 [2024-07-22 20:21:14.833003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.985 [2024-07-22 20:21:14.833018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.985 [2024-07-22 20:21:14.833028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.985 [2024-07-22 20:21:14.833040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.985 [2024-07-22 20:21:14.833312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.985 [2024-07-22 20:21:14.833514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:02.985 [2024-07-22 20:21:14.833649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.985 [2024-07-22 20:21:14.833674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 [2024-07-22 20:21:15.253635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.247 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 Malloc0 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 [2024-07-22 20:21:15.342175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.509 { 00:14:03.509 "params": { 00:14:03.509 "name": "Nvme$subsystem", 00:14:03.509 "trtype": "$TEST_TRANSPORT", 00:14:03.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.509 "adrfam": "ipv4", 00:14:03.509 "trsvcid": "$NVMF_PORT", 00:14:03.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.509 "hdgst": ${hdgst:-false}, 00:14:03.509 "ddgst": ${ddgst:-false} 00:14:03.509 }, 00:14:03.509 "method": "bdev_nvme_attach_controller" 00:14:03.509 } 00:14:03.509 EOF 00:14:03.509 )") 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
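The rpc_cmd calls above provision the target that bdevio is about to exercise: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is the test framework's thin wrapper around SPDK's JSON-RPC client, so a roughly equivalent sequence issued by hand would look like the following (scripts/rpc.py and the default /var/tmp/spdk.sock socket are assumptions on my part; the method names and arguments are taken verbatim from the trace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"     # assumed client and socket path
    $RPC nvmf_create_transport -t tcp -o -u 8192     # flags exactly as rpc_cmd passes them
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio.sh then points the bdevio binary at this target with --json /dev/fd/62; the JSON it reads, printed a few entries below, is produced by gen_nvmf_target_json from the heredoc template shown above and amounts to a single bdev_nvme_attach_controller entry against 10.0.0.2:4420 with hdgst/ddgst disabled.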
00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:03.509 20:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.509 "params": { 00:14:03.509 "name": "Nvme1", 00:14:03.509 "trtype": "tcp", 00:14:03.509 "traddr": "10.0.0.2", 00:14:03.509 "adrfam": "ipv4", 00:14:03.509 "trsvcid": "4420", 00:14:03.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.509 "hdgst": false, 00:14:03.509 "ddgst": false 00:14:03.509 }, 00:14:03.509 "method": "bdev_nvme_attach_controller" 00:14:03.509 }' 00:14:03.509 [2024-07-22 20:21:15.432123] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:03.509 [2024-07-22 20:21:15.432252] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508051 ] 00:14:03.509 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.770 [2024-07-22 20:21:15.560747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.770 [2024-07-22 20:21:15.743548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.770 [2024-07-22 20:21:15.743633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.770 [2024-07-22 20:21:15.743637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.341 I/O targets: 00:14:04.341 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:04.341 00:14:04.341 00:14:04.341 CUnit - A unit testing framework for C - Version 2.1-3 00:14:04.341 http://cunit.sourceforge.net/ 00:14:04.341 00:14:04.341 00:14:04.341 Suite: bdevio tests on: Nvme1n1 00:14:04.341 Test: blockdev write read block ...passed 00:14:04.341 Test: blockdev write zeroes read block ...passed 00:14:04.341 Test: blockdev write zeroes read no split ...passed 00:14:04.341 Test: blockdev write zeroes read split ...passed 00:14:04.341 Test: blockdev write zeroes read split partial ...passed 00:14:04.341 Test: blockdev reset ...[2024-07-22 20:21:16.294300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:04.341 [2024-07-22 20:21:16.294412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:14:04.341 [2024-07-22 20:21:16.355836] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:04.341 passed 00:14:04.601 Test: blockdev write read 8 blocks ...passed 00:14:04.601 Test: blockdev write read size > 128k ...passed 00:14:04.601 Test: blockdev write read invalid size ...passed 00:14:04.601 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:04.601 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:04.601 Test: blockdev write read max offset ...passed 00:14:04.601 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:04.601 Test: blockdev writev readv 8 blocks ...passed 00:14:04.601 Test: blockdev writev readv 30 x 1block ...passed 00:14:04.601 Test: blockdev writev readv block ...passed 00:14:04.601 Test: blockdev writev readv size > 128k ...passed 00:14:04.601 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:04.601 Test: blockdev comparev and writev ...[2024-07-22 20:21:16.623387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.601 [2024-07-22 20:21:16.623422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:04.602 [2024-07-22 20:21:16.623438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.602 [2024-07-22 20:21:16.623447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.623892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.623907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.623919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.623927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.624390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.624404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.624416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.624424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.624830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.624842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.624855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.863 [2024-07-22 20:21:16.624867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:04.863 passed 00:14:04.863 Test: blockdev nvme passthru rw ...passed 00:14:04.863 Test: blockdev nvme passthru vendor specific ...[2024-07-22 20:21:16.709921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.863 [2024-07-22 20:21:16.709944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.710239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.863 [2024-07-22 20:21:16.710255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.710515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.863 [2024-07-22 20:21:16.710525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:04.863 [2024-07-22 20:21:16.710802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.863 [2024-07-22 20:21:16.710812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:04.863 passed 00:14:04.863 Test: blockdev nvme admin passthru ...passed 00:14:04.863 Test: blockdev copy ...passed 00:14:04.863 00:14:04.863 Run Summary: Type Total Ran Passed Failed Inactive 00:14:04.863 suites 1 1 n/a 0 0 00:14:04.863 tests 23 23 23 0 0 00:14:04.863 asserts 152 152 152 0 n/a 00:14:04.863 00:14:04.863 Elapsed time = 1.406 seconds 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.805 rmmod nvme_tcp 00:14:05.805 rmmod nvme_fabrics 00:14:05.805 rmmod nvme_keyring 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
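After the run summary (23 of 23 bdevio tests passed in about 1.4 seconds), the teardown traced here and continued below mirrors the setup in reverse: the subsystem is deleted over RPC, nvmftestfini unloads the NVMe/TCP kernel modules, killprocess stops the nvmf_tgt application, and the test namespace and initiator address are removed. A condensed sketch of those steps (pid 3507700 is the one from this run; _remove_spdk_ns runs with xtrace disabled in the log, so deleting the *_ns_spdk namespace is an assumption about what it does):

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -r nvme-tcp              # the rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    modprobe -r nvme-fabrics
    kill 3507700                      # nvmfpid of the target started earlier; the framework then waits for it to exit
    ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1          # clear the initiator-side address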
00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3507700 ']' 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3507700 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3507700 ']' 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3507700 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3507700 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3507700' 00:14:05.805 killing process with pid 3507700 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3507700 00:14:05.805 20:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3507700 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.377 20:21:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.957 00:14:08.957 real 0m13.297s 00:14:08.957 user 0m19.275s 00:14:08.957 sys 0m6.108s 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.957 ************************************ 00:14:08.957 END TEST nvmf_bdevio 00:14:08.957 ************************************ 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:08.957 00:14:08.957 real 5m10.808s 00:14:08.957 user 12m15.996s 00:14:08.957 sys 1m45.607s 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:08.957 ************************************ 00:14:08.957 END TEST nvmf_target_core 00:14:08.957 
************************************ 00:14:08.957 20:21:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:08.957 20:21:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:08.957 20:21:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.957 20:21:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.957 20:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.957 ************************************ 00:14:08.957 START TEST nvmf_target_extra 00:14:08.957 ************************************ 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:08.957 * Looking for test storage... 00:14:08.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
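At this point nvmf_target_extra.sh takes over and re-sources nvmf/common.sh, which among other things calls nvme gen-hostnqn to mint the NVME_HOSTNQN/NVME_HOSTID pair and wires them into the NVME_HOST and NVME_CONNECT helpers used by the kernel-initiator tests. This part of the log never actually runs nvme connect, so the following is only an illustrative sketch of how those variables are typically consumed, using standard nvme-cli flags and the subsystem/listener values created earlier in this log; the host-ID derivation is an assumption that happens to reproduce the NVME_HOSTID value shown above:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation; matches the trace's NVME_HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"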
00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.957 ************************************ 00:14:08.957 START TEST nvmf_example 00:14:08.957 ************************************ 00:14:08.957 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:08.957 * Looking for test storage... 00:14:08.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.958 20:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.958 20:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:17.096 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:17.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:17.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.096 20:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:17.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:14:17.096 00:14:17.096 --- 10.0.0.2 ping statistics --- 00:14:17.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.096 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:14:17.096 00:14:17.096 --- 10.0.0.1 ping statistics --- 00:14:17.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.096 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3512769 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3512769 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3512769 ']' 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.096 20:21:27 
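Before the example target is launched, the nvmf_tcp_init steps traced above carve the two E810 ports into a point-to-point test network: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side). A minimal sketch of those steps, using the same commands that appear in the trace (run as root; the interface names are specific to this test bed, any pair of connected ports would do):

#!/usr/bin/env bash
# Recreate the TCP test topology set up by nvmf_tcp_init above: the target
# port lives in a private network namespace, the initiator port stays in the
# default namespace, and both sides get an address on 10.0.0.0/24.
set -euo pipefail

TARGET_IF=cvl_0_0       # becomes 10.0.0.2 inside the namespace (target side)
INITIATOR_IF=cvl_0_1    # becomes 10.0.0.1 in the default namespace (initiator side)
NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1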
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.096 20:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.096 20:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:17.096 20:21:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:17.096 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.095 Initializing NVMe Controllers 00:14:27.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.095 Initialization complete. Launching workers. 00:14:27.095 ======================================================== 00:14:27.095 Latency(us) 00:14:27.095 Device Information : IOPS MiB/s Average min max 00:14:27.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16425.70 64.16 3897.61 973.10 23994.80 00:14:27.095 ======================================================== 00:14:27.095 Total : 16425.70 64.16 3897.61 973.10 23994.80 00:14:27.095 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.356 rmmod nvme_tcp 00:14:27.356 rmmod nvme_fabrics 00:14:27.356 rmmod nvme_keyring 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3512769 ']' 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3512769 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3512769 ']' 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3512769 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.356 20:21:39 
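With the network in place, the example target (build/examples/nvmf) is started inside the namespace, configured over its default RPC socket, and then exercised from the initiator side with spdk_nvme_perf, which produced the IOPS/latency summary above. A condensed sketch of that sequence; the SPDK_DIR variable, the rpc() wrapper and the socket-wait loop are conveniences added here, while the NQN, serial number, transport flags and perf parameters are the ones from the trace:

#!/usr/bin/env bash
# Configure and exercise the NVMe-oF/TCP example target as nvmf_example.sh does above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # placeholder; point at your SPDK checkout
NETNS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

# Start the example target inside the namespace and wait for its RPC socket
# (the test uses waitforlisten; a plain poll is enough for a sketch).
ip netns exec "$NETNS" "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done

# Target configuration: TCP transport, one 64 MB malloc bdev (512-byte blocks)
# attached as a namespace of cnode1, listening on 10.0.0.2:4420.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue depth 64, 4 KiB I/O, 30% reads / 70% writes, 10 s run.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"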
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3512769 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3512769' 00:14:27.356 killing process with pid 3512769 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 3512769 00:14:27.356 20:21:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 3512769 00:14:28.299 nvmf threads initialize successfully 00:14:28.299 bdev subsystem init successfully 00:14:28.299 created a nvmf target service 00:14:28.299 create targets's poll groups done 00:14:28.299 all subsystems of target started 00:14:28.299 nvmf target is running 00:14:28.299 all subsystems of target stopped 00:14:28.299 destroy targets's poll groups done 00:14:28.299 destroyed the nvmf target service 00:14:28.299 bdev subsystem finish successfully 00:14:28.299 nvmf threads destroy successfully 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.299 20:21:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.214 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.214 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:30.214 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.214 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:30.477 00:14:30.477 real 0m21.566s 00:14:30.477 user 0m48.176s 00:14:30.477 sys 0m6.532s 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:30.477 ************************************ 00:14:30.477 END TEST nvmf_example 00:14:30.477 ************************************ 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.477 20:21:42 
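Teardown (the nvmftestfini steps traced above) is the mirror image: the kernel NVMe/TCP initiator modules are unloaded, the example target is stopped, and the namespaced test network is removed. A simplified sketch; $nvmfpid stands for the target PID the test records at start-up, and the remove_spdk_ns helper is shown here as a plain ip netns delete of the single namespace this test created:

#!/usr/bin/env bash
# Undo the nvmf_example setup traced above.
NETNS=cvl_0_0_ns_spdk
nvmfpid=${nvmfpid:?pid of the example target}   # recorded when the app was launched

sync
modprobe -v -r nvme-tcp || true      # also drops nvme_fabrics/nvme_keyring once unused
modprobe -v -r nvme-fabrics || true

# Stop the target and wait for it to exit (what the killprocess helper does).
kill "$nvmfpid" 2>/dev/null || true
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done

# Remove the namespace and flush the initiator-side address.
ip netns delete "$NETNS" 2>/dev/null || true
ip -4 addr flush cvl_0_1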
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.477 ************************************ 00:14:30.477 START TEST nvmf_filesystem 00:14:30.477 ************************************ 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:30.477 * Looking for test storage... 00:14:30.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:30.477 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:30.477 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:30.477 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:30.478 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:30.478 #define SPDK_CONFIG_H 00:14:30.478 #define SPDK_CONFIG_APPS 1 00:14:30.478 #define SPDK_CONFIG_ARCH native 00:14:30.478 #define SPDK_CONFIG_ASAN 1 00:14:30.478 #undef SPDK_CONFIG_AVAHI 00:14:30.478 #undef SPDK_CONFIG_CET 00:14:30.478 #define SPDK_CONFIG_COVERAGE 1 00:14:30.478 #define SPDK_CONFIG_CROSS_PREFIX 00:14:30.478 #undef SPDK_CONFIG_CRYPTO 00:14:30.478 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:30.478 #undef SPDK_CONFIG_CUSTOMOCF 00:14:30.478 #undef SPDK_CONFIG_DAOS 00:14:30.478 #define SPDK_CONFIG_DAOS_DIR 00:14:30.478 #define SPDK_CONFIG_DEBUG 1 00:14:30.478 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:30.478 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:30.478 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:30.478 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:30.478 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:30.478 #undef SPDK_CONFIG_DPDK_UADK 00:14:30.478 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:30.478 #define SPDK_CONFIG_EXAMPLES 1 00:14:30.478 #undef SPDK_CONFIG_FC 00:14:30.478 #define SPDK_CONFIG_FC_PATH 00:14:30.478 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:30.478 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:30.478 #undef SPDK_CONFIG_FUSE 00:14:30.478 #undef SPDK_CONFIG_FUZZER 00:14:30.478 #define SPDK_CONFIG_FUZZER_LIB 00:14:30.478 #undef SPDK_CONFIG_GOLANG 00:14:30.478 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:30.478 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:30.478 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:30.478 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:30.478 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:30.478 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:30.478 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:30.478 #define SPDK_CONFIG_IDXD 1 00:14:30.478 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:30.478 #undef SPDK_CONFIG_IPSEC_MB 00:14:30.478 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:30.478 #define SPDK_CONFIG_ISAL 1 00:14:30.478 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:30.478 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:30.478 #define SPDK_CONFIG_LIBDIR 00:14:30.478 #undef SPDK_CONFIG_LTO 00:14:30.478 #define SPDK_CONFIG_MAX_LCORES 128 00:14:30.478 #define SPDK_CONFIG_NVME_CUSE 1 00:14:30.478 #undef SPDK_CONFIG_OCF 00:14:30.478 #define SPDK_CONFIG_OCF_PATH 00:14:30.478 #define SPDK_CONFIG_OPENSSL_PATH 00:14:30.478 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:30.478 #define SPDK_CONFIG_PGO_DIR 00:14:30.478 #undef SPDK_CONFIG_PGO_USE 00:14:30.478 #define SPDK_CONFIG_PREFIX /usr/local 00:14:30.478 #undef SPDK_CONFIG_RAID5F 00:14:30.478 #undef SPDK_CONFIG_RBD 00:14:30.478 #define SPDK_CONFIG_RDMA 1 00:14:30.478 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:30.478 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:30.478 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:30.478 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:30.478 #define SPDK_CONFIG_SHARED 1 00:14:30.478 #undef SPDK_CONFIG_SMA 00:14:30.478 #define SPDK_CONFIG_TESTS 1 00:14:30.478 #undef SPDK_CONFIG_TSAN 00:14:30.478 #define SPDK_CONFIG_UBLK 1 00:14:30.478 #define SPDK_CONFIG_UBSAN 1 00:14:30.478 #undef SPDK_CONFIG_UNIT_TESTS 00:14:30.478 #undef SPDK_CONFIG_URING 00:14:30.478 #define SPDK_CONFIG_URING_PATH 00:14:30.478 #undef SPDK_CONFIG_URING_ZNS 00:14:30.478 #undef SPDK_CONFIG_USDT 00:14:30.478 #undef 
SPDK_CONFIG_VBDEV_COMPRESS 00:14:30.478 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:30.478 #undef SPDK_CONFIG_VFIO_USER 00:14:30.478 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:30.478 #define SPDK_CONFIG_VHOST 1 00:14:30.478 #define SPDK_CONFIG_VIRTIO 1 00:14:30.478 #undef SPDK_CONFIG_VTUNE 00:14:30.478 #define SPDK_CONFIG_VTUNE_DIR 00:14:30.478 #define SPDK_CONFIG_WERROR 1 00:14:30.478 #define SPDK_CONFIG_WPDK_DIR 00:14:30.478 #undef SPDK_CONFIG_XNVME 00:14:30.478 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.478 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
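The wall of CONFIG_*/SPDK_CONFIG_* output above is filesystem.sh sourcing autotest_common.sh, which loads test/common/build_config.sh and then inspects the generated include/spdk/config.h; applications.sh checks whether that header defines SPDK_CONFIG_DEBUG (together with SPDK_AUTOTEST_DEBUG_APPS, which is 0 for this job). A rough stand-alone equivalent of the header check, with the workspace path from this run:

# Check whether the SPDK build under test was configured with --enable-debug,
# as applications.sh does above by inspecting the generated config header.
spdk_root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
config_h="$spdk_root/include/spdk/config.h"

if [[ -e $config_h ]] && grep -q '#define SPDK_CONFIG_DEBUG' "$config_h"; then
    echo "debug build detected"
else
    echo "non-debug build (or config.h not generated)"
fi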
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:30.479 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:30.479 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:30.480 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:30.480 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:30.480 
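The long run of ': <value>' / 'export SPDK_TEST_*' pairs above is autotest_common.sh assigning every test knob a default and exporting it for child scripts; for this job the relevant ones are SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp and SPDK_TEST_NVMF_NICS=e810. The underlying idiom is presumably the usual ':' plus ${VAR:=default} pattern, sketched below with an abridged variable list and the values seen in this run:

# Sketch of how the per-flag defaults traced above are established. Each flag
# keeps any value inherited from the environment, otherwise takes the default,
# and is exported so every child test script sees it.
: "${RUN_NIGHTLY:=1}";                export RUN_NIGHTLY
: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_RUN_ASAN:=1}";              export SPDK_RUN_ASAN
: "${SPDK_RUN_UBSAN:=1}";             export SPDK_RUN_UBSAN
: "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
: "${SPDK_TEST_NVME_CLI:=1}";         export SPDK_TEST_NVME_CLI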
20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:30.480 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:30.481 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:14:30.481 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
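The exports above also pin down the sanitizer behaviour for everything the test launches: ASan and UBSan abort on the first error with core dumps disabled, and a one-entry LeakSanitizer suppression file hides the known libfuse3 leak. The same environment, collected into a runnable snippet with the values copied from the trace:

# Sanitizer environment used for the ASan/UBSan builds exercised by this job.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Suppress the known libfuse3 leak when LeakSanitizer runs.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file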
'[' -z /var/spdk/dependencies ']' 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:14:30.743 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3515570 ]] 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3515570 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:14:30.743 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.rIGt0X 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rIGt0X/tests/target /tmp/spdk.rIGt0X 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:14:30.744 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118521577472 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370976256 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10849398784 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64674230272 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685486080 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25850851328 00:14:30.744 20:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23347200 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684670976 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=819200 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:14:30.744 * Looking for test storage... 
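[editor's note] The entries above trace set_test_storage: it builds a list of candidate directories, sizes each one against the mounted filesystems read from df -T, and keeps the first candidate with enough free space as SPDK_TEST_STORAGE. A minimal, hypothetical sketch of that selection loop follows; requested_size, testdir and the candidate order follow the values visible in this log, and df --output is used here only to keep the sketch short, not because autotest_common.sh parses df this way.

    requested_size=2214592512                                  # ~2 GiB, as requested in the trace above
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                # unused temp path, last-resort candidate
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mkdir -p "$target_dir" 2>/dev/null || continue           # candidate must be creatable
      avail=$(df --output=avail -B1 "$target_dir" | tail -1)   # free bytes on the backing filesystem
      if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir                   # reported below as "Found test storage at ..."
        break
      fi
    done

The real script additionally checks that growing the target filesystem's usage would stay under 95%, which is why new_size appears in the trace below.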
00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118521577472 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13063991296 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.744 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.745 20:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.334 
20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.334 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.335 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.596 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:37.856 00:14:37.856 --- 10.0.0.2 ping statistics --- 00:14:37.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.856 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:14:37.856 00:14:37.856 --- 10.0.0.1 ping statistics --- 00:14:37.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.856 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.856 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 ************************************ 00:14:37.857 START TEST nvmf_filesystem_no_in_capsule 00:14:37.857 ************************************ 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3519408 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3519408 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3519408 ']' 00:14:37.857 
20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.857 20:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.857 [2024-07-22 20:21:49.846453] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:37.857 [2024-07-22 20:21:49.846550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.117 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.117 [2024-07-22 20:21:49.968286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.378 [2024-07-22 20:21:50.161270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.378 [2024-07-22 20:21:50.161317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.378 [2024-07-22 20:21:50.161330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.378 [2024-07-22 20:21:50.161340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.378 [2024-07-22 20:21:50.161350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
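[editor's note] nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target's RPC socket answers. The loop below is a hypothetical reduction of that wait: it polls /var/tmp/spdk.sock with rpc.py (rpc_get_methods is simply a cheap RPC to probe with) while checking that the PID reported in this log, 3519408, is still alive; the real waitforlisten in autotest_common.sh does more bookkeeping.

    rpc_addr=/var/tmp/spdk.sock
    nvmfpid=3519408                                            # nvmf_tgt PID reported above
    for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break                                                  # target is up and serving RPCs
      fi
      sleep 0.5
    done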
00:14:38.378 [2024-07-22 20:21:50.161468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.378 [2024-07-22 20:21:50.161468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.378 [2024-07-22 20:21:50.161623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.378 [2024-07-22 20:21:50.161651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.638 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:38.639 [2024-07-22 20:21:50.634913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.639 20:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.210 Malloc1 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.210 20:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.210 [2024-07-22 20:21:51.066518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.210 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:39.211 { 00:14:39.211 "name": "Malloc1", 00:14:39.211 "aliases": [ 00:14:39.211 "a756f820-86b1-44ed-9e51-8195954dc896" 00:14:39.211 ], 00:14:39.211 "product_name": "Malloc disk", 00:14:39.211 "block_size": 512, 00:14:39.211 "num_blocks": 1048576, 00:14:39.211 "uuid": "a756f820-86b1-44ed-9e51-8195954dc896", 00:14:39.211 "assigned_rate_limits": { 00:14:39.211 "rw_ios_per_sec": 0, 00:14:39.211 "rw_mbytes_per_sec": 0, 00:14:39.211 "r_mbytes_per_sec": 0, 00:14:39.211 "w_mbytes_per_sec": 0 00:14:39.211 }, 00:14:39.211 "claimed": true, 00:14:39.211 "claim_type": "exclusive_write", 00:14:39.211 "zoned": false, 00:14:39.211 "supported_io_types": { 00:14:39.211 "read": 
true, 00:14:39.211 "write": true, 00:14:39.211 "unmap": true, 00:14:39.211 "flush": true, 00:14:39.211 "reset": true, 00:14:39.211 "nvme_admin": false, 00:14:39.211 "nvme_io": false, 00:14:39.211 "nvme_io_md": false, 00:14:39.211 "write_zeroes": true, 00:14:39.211 "zcopy": true, 00:14:39.211 "get_zone_info": false, 00:14:39.211 "zone_management": false, 00:14:39.211 "zone_append": false, 00:14:39.211 "compare": false, 00:14:39.211 "compare_and_write": false, 00:14:39.211 "abort": true, 00:14:39.211 "seek_hole": false, 00:14:39.211 "seek_data": false, 00:14:39.211 "copy": true, 00:14:39.211 "nvme_iov_md": false 00:14:39.211 }, 00:14:39.211 "memory_domains": [ 00:14:39.211 { 00:14:39.211 "dma_device_id": "system", 00:14:39.211 "dma_device_type": 1 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.211 "dma_device_type": 2 00:14:39.211 } 00:14:39.211 ], 00:14:39.211 "driver_specific": {} 00:14:39.211 } 00:14:39.211 ]' 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:39.211 20:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.158 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.158 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:41.158 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.158 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:41.158 20:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:43.101 20:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:43.101 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:44.043 20:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:44.984 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:44.984 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:44.984 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:44.984 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.984 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:44.984 ************************************ 00:14:44.984 START TEST filesystem_ext4 00:14:44.984 ************************************ 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
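[editor's note] The filesystem_ext4 test starting here follows a simple format-and-exercise pattern, traced in the entries below: make a filesystem on the first partition of the connected namespace, mount it, create and delete a file with syncs in between, then unmount. A hypothetical condensed form of that sequence (ext4 case only; the -F force flag is ext4-specific, while btrfs and xfs use -f):

    fstype=ext4
    nvme_name=nvme0n1                                          # resolved above from the SPDKISFASTANDAWESOME serial
    mkfs."$fstype" -F /dev/"${nvme_name}"p1
    mount /dev/"${nvme_name}"p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device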
00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:44.985 20:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:44.985 mke2fs 1.46.5 (30-Dec-2021) 00:14:44.985 Discarding device blocks: 0/522240 done 00:14:44.985 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:44.985 Filesystem UUID: cd0b97a6-f7c6-4b85-b295-cc55edd5c230 00:14:44.985 Superblock backups stored on blocks: 00:14:44.985 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:44.985 00:14:44.985 Allocating group tables: 0/64 done 00:14:44.985 Writing inode tables: 0/64 done 00:14:45.245 Creating journal (8192 blocks): done 00:14:46.186 Writing superblocks and filesystem accounting information: 0/64 done 00:14:46.186 00:14:46.186 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:46.186 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:46.446 
20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3519408 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:46.446 00:14:46.446 real 0m1.558s 00:14:46.446 user 0m0.032s 00:14:46.446 sys 0m0.064s 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:46.446 ************************************ 00:14:46.446 END TEST filesystem_ext4 00:14:46.446 ************************************ 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.446 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:46.706 ************************************ 00:14:46.706 START TEST filesystem_btrfs 00:14:46.706 ************************************ 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:14:46.706 20:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:46.706 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:46.966 btrfs-progs v6.6.2 00:14:46.967 See https://btrfs.readthedocs.io for more information. 00:14:46.967 00:14:46.967 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:46.967 NOTE: several default settings have changed in version 5.15, please make sure 00:14:46.967 this does not affect your deployments: 00:14:46.967 - DUP for metadata (-m dup) 00:14:46.967 - enabled no-holes (-O no-holes) 00:14:46.967 - enabled free-space-tree (-R free-space-tree) 00:14:46.967 00:14:46.967 Label: (null) 00:14:46.967 UUID: 34955772-f784-424d-a831-7799e842bf84 00:14:46.967 Node size: 16384 00:14:46.967 Sector size: 4096 00:14:46.967 Filesystem size: 510.00MiB 00:14:46.967 Block group profiles: 00:14:46.967 Data: single 8.00MiB 00:14:46.967 Metadata: DUP 32.00MiB 00:14:46.967 System: DUP 8.00MiB 00:14:46.967 SSD detected: yes 00:14:46.967 Zoned device: no 00:14:46.967 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:46.967 Runtime features: free-space-tree 00:14:46.967 Checksum: crc32c 00:14:46.967 Number of devices: 1 00:14:46.967 Devices: 00:14:46.967 ID SIZE PATH 00:14:46.967 1 510.00MiB /dev/nvme0n1p1 00:14:46.967 00:14:46.967 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:46.967 20:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3519408 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:47.227 00:14:47.227 real 0m0.671s 00:14:47.227 user 0m0.032s 00:14:47.227 sys 0m0.129s 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:47.227 ************************************ 00:14:47.227 END TEST filesystem_btrfs 00:14:47.227 ************************************ 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.227 ************************************ 00:14:47.227 START TEST filesystem_xfs 00:14:47.227 ************************************ 00:14:47.227 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:47.228 20:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:47.228 20:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:47.488 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:47.488 = sectsz=512 attr=2, projid32bit=1 00:14:47.488 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:47.488 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:47.488 data = bsize=4096 blocks=130560, imaxpct=25 00:14:47.488 = sunit=0 swidth=0 blks 00:14:47.488 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:47.488 log =internal log bsize=4096 blocks=16384, version=2 00:14:47.488 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:47.488 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:48.059 Discarding blocks...Done. 00:14:48.059 20:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:48.059 20:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:50.600 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3519408 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:50.860 00:14:50.860 real 0m3.445s 00:14:50.860 user 0m0.028s 00:14:50.860 sys 0m0.076s 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:50.860 ************************************ 00:14:50.860 END TEST filesystem_xfs 00:14:50.860 ************************************ 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:50.860 20:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:50.860 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.121 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.121 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:51.121 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:51.121 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.121 20:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3519408 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3519408 ']' 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3519408 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3519408 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:51.121 20:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3519408' 00:14:51.121 killing process with pid 3519408 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3519408 00:14:51.121 20:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3519408 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:53.032 00:14:53.032 real 0m15.102s 00:14:53.032 user 0m57.859s 00:14:53.032 sys 0m1.338s 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.032 ************************************ 00:14:53.032 END TEST nvmf_filesystem_no_in_capsule 00:14:53.032 ************************************ 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:53.032 ************************************ 00:14:53.032 START TEST nvmf_filesystem_in_capsule 00:14:53.032 ************************************ 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3522478 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3522478 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.032 20:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3522478 ']' 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.032 20:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.032 [2024-07-22 20:22:05.046909] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:53.032 [2024-07-22 20:22:05.047040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.292 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.292 [2024-07-22 20:22:05.184734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.553 [2024-07-22 20:22:05.374662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.553 [2024-07-22 20:22:05.374706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.553 [2024-07-22 20:22:05.374719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.553 [2024-07-22 20:22:05.374729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.553 [2024-07-22 20:22:05.374739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
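The in-capsule variant repeats the same flow but first brings up its own target and configures it over RPC, which is what the following lines record. In outline, using SPDK's rpc.py client (the test harness wraps it as rpc_cmd and points it at the target's socket inside the namespace; the workspace path is the one from the log, the scripts/rpc.py invocation is assumed):

# start the NVMe-oF target inside the test network namespace
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# TCP transport with a 4096-byte in-capsule data size -- the point of this test variant
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
# 512 MiB malloc bdev exported as a namespace of cnode1, listening on 10.0.0.2:4420
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420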
00:14:53.553 [2024-07-22 20:22:05.374911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.553 [2024-07-22 20:22:05.374998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.553 [2024-07-22 20:22:05.375132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.553 [2024-07-22 20:22:05.375158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.814 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.814 [2024-07-22 20:22:05.827793] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.074 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.074 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:54.074 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.074 20:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.335 Malloc1 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.335 [2024-07-22 20:22:06.249302] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.335 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:54.336 { 00:14:54.336 "name": "Malloc1", 00:14:54.336 "aliases": [ 00:14:54.336 "b8f184e3-986e-4191-a278-c05e261da337" 00:14:54.336 ], 00:14:54.336 "product_name": "Malloc disk", 00:14:54.336 "block_size": 512, 00:14:54.336 "num_blocks": 1048576, 00:14:54.336 "uuid": "b8f184e3-986e-4191-a278-c05e261da337", 00:14:54.336 "assigned_rate_limits": { 00:14:54.336 "rw_ios_per_sec": 0, 00:14:54.336 "rw_mbytes_per_sec": 0, 00:14:54.336 "r_mbytes_per_sec": 0, 00:14:54.336 "w_mbytes_per_sec": 0 00:14:54.336 }, 00:14:54.336 "claimed": true, 00:14:54.336 "claim_type": "exclusive_write", 00:14:54.336 "zoned": false, 00:14:54.336 "supported_io_types": { 00:14:54.336 "read": true, 00:14:54.336 "write": true, 00:14:54.336 "unmap": true, 00:14:54.336 "flush": true, 00:14:54.336 "reset": true, 00:14:54.336 "nvme_admin": false, 
00:14:54.336 "nvme_io": false, 00:14:54.336 "nvme_io_md": false, 00:14:54.336 "write_zeroes": true, 00:14:54.336 "zcopy": true, 00:14:54.336 "get_zone_info": false, 00:14:54.336 "zone_management": false, 00:14:54.336 "zone_append": false, 00:14:54.336 "compare": false, 00:14:54.336 "compare_and_write": false, 00:14:54.336 "abort": true, 00:14:54.336 "seek_hole": false, 00:14:54.336 "seek_data": false, 00:14:54.336 "copy": true, 00:14:54.336 "nvme_iov_md": false 00:14:54.336 }, 00:14:54.336 "memory_domains": [ 00:14:54.336 { 00:14:54.336 "dma_device_id": "system", 00:14:54.336 "dma_device_type": 1 00:14:54.336 }, 00:14:54.336 { 00:14:54.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.336 "dma_device_type": 2 00:14:54.336 } 00:14:54.336 ], 00:14:54.336 "driver_specific": {} 00:14:54.336 } 00:14:54.336 ]' 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:54.336 20:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.250 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.250 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.250 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.250 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:56.250 20:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:58.161 20:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:58.161 20:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:58.420 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:58.681 20:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.067 ************************************ 00:15:00.067 START TEST filesystem_in_capsule_ext4 00:15:00.067 ************************************ 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:00.067 20:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:15:00.067 20:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:00.067 mke2fs 1.46.5 (30-Dec-2021) 00:15:00.067 Discarding device blocks: 0/522240 done 00:15:00.067 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:00.067 Filesystem UUID: f584cf32-8fbd-45b6-b2a6-614053f6fbea 00:15:00.067 Superblock backups stored on blocks: 00:15:00.067 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:00.067 00:15:00.067 Allocating group tables: 0/64 done 00:15:00.067 Writing inode tables: 0/64 done 00:15:00.067 Creating journal (8192 blocks): done 00:15:01.010 Writing superblocks and filesystem accounting information: 0/64 done 00:15:01.010 00:15:01.010 20:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:15:01.010 20:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:01.583 20:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3522478 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:01.583 00:15:01.583 real 0m1.862s 00:15:01.583 user 0m0.030s 00:15:01.583 sys 0m0.069s 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.583 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:01.583 ************************************ 00:15:01.583 END TEST filesystem_in_capsule_ext4 00:15:01.583 ************************************ 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.881 ************************************ 00:15:01.881 START TEST filesystem_in_capsule_btrfs 00:15:01.881 ************************************ 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:15:01.881 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:02.167 btrfs-progs v6.6.2 00:15:02.167 See https://btrfs.readthedocs.io for more information. 00:15:02.167 00:15:02.167 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:02.167 NOTE: several default settings have changed in version 5.15, please make sure 00:15:02.167 this does not affect your deployments: 00:15:02.167 - DUP for metadata (-m dup) 00:15:02.167 - enabled no-holes (-O no-holes) 00:15:02.167 - enabled free-space-tree (-R free-space-tree) 00:15:02.167 00:15:02.167 Label: (null) 00:15:02.167 UUID: b1eee4a7-95f2-46ec-9f7a-cd45e08cbd25 00:15:02.167 Node size: 16384 00:15:02.167 Sector size: 4096 00:15:02.167 Filesystem size: 510.00MiB 00:15:02.167 Block group profiles: 00:15:02.167 Data: single 8.00MiB 00:15:02.167 Metadata: DUP 32.00MiB 00:15:02.167 System: DUP 8.00MiB 00:15:02.167 SSD detected: yes 00:15:02.167 Zoned device: no 00:15:02.167 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:02.167 Runtime features: free-space-tree 00:15:02.167 Checksum: crc32c 00:15:02.167 Number of devices: 1 00:15:02.167 Devices: 00:15:02.167 ID SIZE PATH 00:15:02.167 1 510.00MiB /dev/nvme0n1p1 00:15:02.167 00:15:02.167 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:15:02.167 20:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:02.740 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:02.740 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:02.740 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:02.740 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3522478 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:03.001 00:15:03.001 real 0m1.157s 00:15:03.001 user 0m0.027s 00:15:03.001 sys 0m0.136s 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:03.001 ************************************ 00:15:03.001 END TEST filesystem_in_capsule_btrfs 00:15:03.001 ************************************ 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:03.001 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:03.002 ************************************ 00:15:03.002 START TEST filesystem_in_capsule_xfs 00:15:03.002 ************************************ 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:15:03.002 20:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:15:03.002 20:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:03.002 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:03.002 = sectsz=512 attr=2, projid32bit=1 00:15:03.002 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:03.002 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:03.002 data = bsize=4096 blocks=130560, imaxpct=25 00:15:03.002 = sunit=0 swidth=0 blks 00:15:03.002 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:03.002 log =internal log bsize=4096 blocks=16384, version=2 00:15:03.002 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:03.002 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:04.388 Discarding blocks...Done. 00:15:04.388 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:15:04.388 20:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3522478 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:06.301 00:15:06.301 real 0m2.991s 00:15:06.301 user 0m0.032s 00:15:06.301 sys 0m0.072s 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.301 
20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:06.301 ************************************ 00:15:06.301 END TEST filesystem_in_capsule_xfs 00:15:06.301 ************************************ 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:15:06.301 20:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:06.301 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:06.872 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3522478 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3522478 ']' 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3522478 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
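The XFS pass above follows the same pattern as the earlier btrfs one: format the namespace's first partition, mount it, perform a tiny write/remove cycle, unmount, and then confirm via lsblk that both the controller namespace and the partition are still visible while the target process stays alive. A condensed, hedged sketch of that sequence — the device path, mount point and target PID are the ones echoed in the trace, used here only as placeholders:

```bash
#!/usr/bin/env bash
# Sketch of the in-capsule filesystem check as it appears in the trace above.
# /dev/nvme0n1p1, /mnt/device and the PID 3522478 are taken from the log output.
set -euo pipefail

dev=/dev/nvme0n1p1
mnt=/mnt/device
tgt_pid=3522478                            # nvmf_tgt PID reported by the test

mkfs.xfs -f "$dev"                         # xfs (unlike ext4) needs -f to reuse the partition
mount "$dev" "$mnt"

touch "$mnt/aaa"                           # minimal I/O: create, flush, delete, flush
sync
rm "$mnt/aaa"
sync

umount "$mnt"

kill -0 "$tgt_pid"                         # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1      # controller namespace still attached
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition survived the mount/unmount cycle
echo "filesystem check passed"
```

After the per-filesystem loops, the trace above shows the harness removing the partition under flock with `parted -s /dev/nvme0n1 rm 1`, disconnecting the initiator from cnode1 and deleting the subsystem before killing the target.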
00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3522478 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3522478' 00:15:07.134 killing process with pid 3522478 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3522478 00:15:07.134 20:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3522478 00:15:09.048 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:09.048 00:15:09.048 real 0m15.832s 00:15:09.048 user 1m0.581s 00:15:09.048 sys 0m1.453s 00:15:09.048 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.048 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:09.048 ************************************ 00:15:09.048 END TEST nvmf_filesystem_in_capsule 00:15:09.048 ************************************ 00:15:09.048 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:15:09.048 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.049 rmmod nvme_tcp 00:15:09.049 rmmod nvme_fabrics 00:15:09.049 rmmod nvme_keyring 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.049 20:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.963 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.963 00:15:10.963 real 0m40.661s 00:15:10.963 user 2m0.702s 00:15:10.963 sys 0m8.161s 00:15:10.963 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.963 20:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:10.963 ************************************ 00:15:10.963 END TEST nvmf_filesystem 00:15:10.963 ************************************ 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.225 ************************************ 00:15:11.225 START TEST nvmf_target_discovery 00:15:11.225 ************************************ 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:11.225 * Looking for test storage... 
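Between suites the harness tears the TCP transport back down: the kernel nvme-tcp/nvme-fabrics modules are unloaded, the SPDK target's network namespace is removed, and the initiator-side address is flushed. A hedged sketch of that cleanup using the module, namespace and interface names printed above; note that the trace redirects `_remove_spdk_ns` output away, so the explicit `ip netns delete` below is an assumption about what that helper does, and the function name `cleanup_tcp_transport` is invented for this example:

```bash
#!/usr/bin/env bash
# Illustrative teardown matching the nvmftestfini output above.
# cvl_0_0_ns_spdk and cvl_0_1 are the names that appear in the trace.
cleanup_tcp_transport() {
    sync
    # Unload initiator modules; the trace runs these under "set +e" in a retry
    # loop, so tolerate failures here as well.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true

    # Drop the namespace that hosted nvmf_tgt, if it still exists
    # (assumed equivalent of the silenced _remove_spdk_ns helper).
    if ip netns list | grep -q -w cvl_0_0_ns_spdk; then
        ip netns delete cvl_0_0_ns_spdk
    fi

    # Flush the IPv4 address left on the initiator-side interface.
    ip -4 addr flush cvl_0_1
}

cleanup_tcp_transport
```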
00:15:11.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.225 20:22:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:15:19.368 20:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:19.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:19.368 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.368 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:19.369 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
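The device-discovery phase above maps each supported NIC's PCI function to its kernel net device by globbing sysfs, then keeps only interfaces that are up. A small hedged sketch of that lookup for the two E810 ports reported in the log (the PCI addresses are the ones printed above; the operstate check mirrors the `[[ up == up ]]` comparisons in the trace):

```bash
#!/usr/bin/env bash
# Map a PCI function to its network interface(s) via sysfs, as common.sh does above.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$netdir" ]] || continue
        dev=${netdir##*/}                                    # e.g. cvl_0_0 / cvl_0_1
        state=$(cat "$netdir/operstate" 2>/dev/null || echo unknown)
        echo "Found net device under $pci: $dev (operstate=$state)"
    done
done
```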
00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:19.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:15:19.369 00:15:19.369 --- 10.0.0.2 ping statistics --- 00:15:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.369 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:15:19.369 00:15:19.369 --- 10.0.0.1 ping statistics --- 00:15:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.369 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3529968 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3529968 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3529968 ']' 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
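The two pings above validate the per-test topology: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, with TCP/4420 opened in iptables. A hedged sketch of that setup, using exactly the interface names, namespace and addresses from the trace:

```bash
#!/usr/bin/env bash
# Back-to-back namespace topology as echoed by nvmf_tcp_init above.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                # target namespace -> root namespace
```

With the path verified in both directions, the trace then launches nvmf_tgt inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF`) and waits for it to listen on /var/tmp/spdk.sock.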
00:15:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.369 20:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 [2024-07-22 20:22:30.491329] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:19.369 [2024-07-22 20:22:30.491431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.369 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.369 [2024-07-22 20:22:30.611895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.369 [2024-07-22 20:22:30.791068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.369 [2024-07-22 20:22:30.791115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.369 [2024-07-22 20:22:30.791128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.369 [2024-07-22 20:22:30.791137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.369 [2024-07-22 20:22:30.791147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.369 [2024-07-22 20:22:30.791347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.369 [2024-07-22 20:22:30.791501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.369 [2024-07-22 20:22:30.791641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.369 [2024-07-22 20:22:30.791667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 [2024-07-22 20:22:31.275941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:19.369 20:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 Null1 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.369 [2024-07-22 20:22:31.336331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:19.369 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 Null2 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 Null3 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:19.630 20:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 Null4 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.630 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:19.631 00:15:19.631 Discovery Log Number of Records 6, Generation counter 6 00:15:19.631 
=====Discovery Log Entry 0====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: current discovery subsystem 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4420 00:15:19.631 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: explicit discovery connections, duplicate discovery information 00:15:19.631 sectype: none 00:15:19.631 =====Discovery Log Entry 1====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: nvme subsystem 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4420 00:15:19.631 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: none 00:15:19.631 sectype: none 00:15:19.631 =====Discovery Log Entry 2====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: nvme subsystem 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4420 00:15:19.631 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: none 00:15:19.631 sectype: none 00:15:19.631 =====Discovery Log Entry 3====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: nvme subsystem 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4420 00:15:19.631 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: none 00:15:19.631 sectype: none 00:15:19.631 =====Discovery Log Entry 4====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: nvme subsystem 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4420 00:15:19.631 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: none 00:15:19.631 sectype: none 00:15:19.631 =====Discovery Log Entry 5====== 00:15:19.631 trtype: tcp 00:15:19.631 adrfam: ipv4 00:15:19.631 subtype: discovery subsystem referral 00:15:19.631 treq: not required 00:15:19.631 portid: 0 00:15:19.631 trsvcid: 4430 00:15:19.631 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:19.631 traddr: 10.0.0.2 00:15:19.631 eflags: none 00:15:19.631 sectype: none 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:19.631 Perform nvmf subsystem discovery via RPC 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.631 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.631 [ 00:15:19.631 { 00:15:19.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.631 "subtype": "Discovery", 00:15:19.631 "listen_addresses": [ 00:15:19.631 { 00:15:19.631 "trtype": "TCP", 00:15:19.631 "adrfam": "IPv4", 00:15:19.631 "traddr": "10.0.0.2", 00:15:19.631 "trsvcid": "4420" 00:15:19.631 } 00:15:19.631 ], 00:15:19.631 "allow_any_host": true, 00:15:19.631 "hosts": [] 00:15:19.631 }, 00:15:19.631 { 00:15:19.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.631 "subtype": "NVMe", 00:15:19.631 "listen_addresses": [ 00:15:19.631 { 00:15:19.631 "trtype": "TCP", 00:15:19.631 "adrfam": "IPv4", 00:15:19.631 "traddr": "10.0.0.2", 00:15:19.631 "trsvcid": "4420" 00:15:19.631 } 00:15:19.631 ], 00:15:19.631 "allow_any_host": true, 00:15:19.631 "hosts": [], 00:15:19.631 
"serial_number": "SPDK00000000000001", 00:15:19.631 "model_number": "SPDK bdev Controller", 00:15:19.631 "max_namespaces": 32, 00:15:19.631 "min_cntlid": 1, 00:15:19.631 "max_cntlid": 65519, 00:15:19.631 "namespaces": [ 00:15:19.631 { 00:15:19.631 "nsid": 1, 00:15:19.631 "bdev_name": "Null1", 00:15:19.631 "name": "Null1", 00:15:19.631 "nguid": "3D91D06C652047F1B6D47E0A78DD9228", 00:15:19.631 "uuid": "3d91d06c-6520-47f1-b6d4-7e0a78dd9228" 00:15:19.631 } 00:15:19.631 ] 00:15:19.631 }, 00:15:19.631 { 00:15:19.631 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:19.631 "subtype": "NVMe", 00:15:19.631 "listen_addresses": [ 00:15:19.631 { 00:15:19.631 "trtype": "TCP", 00:15:19.631 "adrfam": "IPv4", 00:15:19.631 "traddr": "10.0.0.2", 00:15:19.631 "trsvcid": "4420" 00:15:19.631 } 00:15:19.631 ], 00:15:19.631 "allow_any_host": true, 00:15:19.631 "hosts": [], 00:15:19.631 "serial_number": "SPDK00000000000002", 00:15:19.631 "model_number": "SPDK bdev Controller", 00:15:19.631 "max_namespaces": 32, 00:15:19.631 "min_cntlid": 1, 00:15:19.631 "max_cntlid": 65519, 00:15:19.631 "namespaces": [ 00:15:19.631 { 00:15:19.631 "nsid": 1, 00:15:19.631 "bdev_name": "Null2", 00:15:19.631 "name": "Null2", 00:15:19.631 "nguid": "F8CCE8EFFA3648AA8DA2C76E751CFC22", 00:15:19.631 "uuid": "f8cce8ef-fa36-48aa-8da2-c76e751cfc22" 00:15:19.632 } 00:15:19.632 ] 00:15:19.632 }, 00:15:19.632 { 00:15:19.632 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:19.632 "subtype": "NVMe", 00:15:19.632 "listen_addresses": [ 00:15:19.632 { 00:15:19.893 "trtype": "TCP", 00:15:19.893 "adrfam": "IPv4", 00:15:19.893 "traddr": "10.0.0.2", 00:15:19.893 "trsvcid": "4420" 00:15:19.893 } 00:15:19.893 ], 00:15:19.893 "allow_any_host": true, 00:15:19.893 "hosts": [], 00:15:19.893 "serial_number": "SPDK00000000000003", 00:15:19.893 "model_number": "SPDK bdev Controller", 00:15:19.893 "max_namespaces": 32, 00:15:19.893 "min_cntlid": 1, 00:15:19.893 "max_cntlid": 65519, 00:15:19.893 "namespaces": [ 00:15:19.893 { 00:15:19.893 "nsid": 1, 00:15:19.893 "bdev_name": "Null3", 00:15:19.893 "name": "Null3", 00:15:19.893 "nguid": "74083F9ADF544996868FFD73736D35C7", 00:15:19.893 "uuid": "74083f9a-df54-4996-868f-fd73736d35c7" 00:15:19.893 } 00:15:19.893 ] 00:15:19.893 }, 00:15:19.893 { 00:15:19.893 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:19.893 "subtype": "NVMe", 00:15:19.893 "listen_addresses": [ 00:15:19.893 { 00:15:19.893 "trtype": "TCP", 00:15:19.893 "adrfam": "IPv4", 00:15:19.893 "traddr": "10.0.0.2", 00:15:19.893 "trsvcid": "4420" 00:15:19.893 } 00:15:19.893 ], 00:15:19.893 "allow_any_host": true, 00:15:19.893 "hosts": [], 00:15:19.893 "serial_number": "SPDK00000000000004", 00:15:19.893 "model_number": "SPDK bdev Controller", 00:15:19.893 "max_namespaces": 32, 00:15:19.893 "min_cntlid": 1, 00:15:19.893 "max_cntlid": 65519, 00:15:19.893 "namespaces": [ 00:15:19.893 { 00:15:19.893 "nsid": 1, 00:15:19.893 "bdev_name": "Null4", 00:15:19.893 "name": "Null4", 00:15:19.893 "nguid": "C6F398A70314424A9ABA1C0146B4BDF0", 00:15:19.893 "uuid": "c6f398a7-0314-424a-9aba-1c0146b4bdf0" 00:15:19.893 } 00:15:19.893 ] 00:15:19.893 } 00:15:19.893 ] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:19.893 20:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.893 rmmod nvme_tcp 00:15:19.893 rmmod nvme_fabrics 00:15:19.893 rmmod nvme_keyring 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:15:19.893 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
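The loop above walks cnode1 through cnode4, deleting each subsystem and then its backing null bdev, removes the discovery referral, and confirms bdev_get_bdevs comes back empty before the target is torn down. A minimal sketch of the same teardown driven directly through SPDK's rpc.py (an assumption here: the test itself goes through the rpc_cmd wrapper, and the rpc.py path below is simply taken from this workspace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"      # remove the NVMe-oF subsystem
    "$rpc" bdev_null_delete "Null$i"                                # remove its backing null bdev
  done
  "$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # referral added during setup
  "$rpc" bdev_get_bdevs | jq -r '.[].name'                          # prints nothing once all bdevs are gone
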
00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3529968 ']' 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3529968 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3529968 ']' 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3529968 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3529968 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3529968' 00:15:19.894 killing process with pid 3529968 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3529968 00:15:19.894 20:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3529968 00:15:20.834 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.834 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.834 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.834 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.834 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.835 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.835 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.835 20:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:23.381 00:15:23.381 real 0m11.818s 00:15:23.381 user 0m9.218s 00:15:23.381 sys 0m5.798s 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.381 ************************************ 00:15:23.381 END TEST nvmf_target_discovery 00:15:23.381 ************************************ 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.381 ************************************ 00:15:23.381 START TEST nvmf_referrals 00:15:23.381 ************************************ 00:15:23.381 20:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:23.381 * Looking for test storage... 00:15:23.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.381 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.382 20:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.382 20:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.970 20:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:15:29.970 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:29.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.971 20:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:29.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:29.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:29.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.971 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.232 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.232 20:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.232 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.232 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.232 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.232 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.232 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:15:30.493 00:15:30.493 --- 10.0.0.2 ping statistics --- 00:15:30.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.493 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:15:30.493 00:15:30.493 --- 10.0.0.1 ping statistics --- 00:15:30.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.493 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3534620 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3534620 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3534620 ']' 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
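The ping exchange above closes out nvmf_tcp_init: the target-side port sits in a private network namespace with 10.0.0.2 while the initiator keeps 10.0.0.1 in the root namespace, so the discovery and referral traffic in the rest of this test crosses a real TCP path between the two ports. A condensed sketch of that wiring, run as root, using the interface names from this rig (cvl_0_0/cvl_0_1 are specific to this machine; substitute your own ports):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> root namespace
  # Target then runs inside the namespace, as nvmfappstart does below:
  ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
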
00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.493 20:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:30.493 [2024-07-22 20:22:42.447600] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:30.493 [2024-07-22 20:22:42.447726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.792 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.792 [2024-07-22 20:22:42.581384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.792 [2024-07-22 20:22:42.765166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.792 [2024-07-22 20:22:42.765214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.792 [2024-07-22 20:22:42.765228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.792 [2024-07-22 20:22:42.765237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.792 [2024-07-22 20:22:42.765247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.792 [2024-07-22 20:22:42.765453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.792 [2024-07-22 20:22:42.765535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.792 [2024-07-22 20:22:42.765649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.792 [2024-07-22 20:22:42.765676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 [2024-07-22 20:22:43.245872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 [2024-07-22 20:22:43.262061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.365 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.625 20:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:31.625 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
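The checks above all follow the same pattern: the referrals the target reports over RPC must match what a host sees in the discovery log page served on 10.0.0.2:8009. A hedged sketch of that round trip with plain rpc.py and nvme-cli (the test additionally passes --hostnqn/--hostid generated with nvme gen-hostnqn; they are omitted here for brevity):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430       # referral to another discovery service
  # Target-side view of the registered referrals:
  "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Host-side view: every log page record except the discovery subsystem we queried:
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Both listings should agree; the comparisons above and below fail the test if they ever diverge.
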
00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:31.885 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:31.886 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:31.886 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.146 20:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.146 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:32.406 20:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:32.406 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:32.666 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
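get_discovery_entries, exercised above, narrows the same discovery log page down by record subtype so the test can tell a referral registered with -n nqn.2016-06.io.spdk:cnode1 (reported as an "nvme subsystem" record) apart from one registered with -n discovery (reported as a "discovery subsystem referral" record). A small sketch of those two jq filters, assuming the same 10.0.0.2:8009 discovery listener:

  discover_json() {
    # Raw discovery log page as JSON; the test also passes --hostnqn/--hostid here.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json
  }
  # Referral to a regular subsystem carries that subsystem's NQN:
  discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # Referral to another discovery service points at the well-known discovery NQN:
  discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
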
00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.667 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
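What follows is the standard teardown: nvmfcleanup retries unloading the kernel initiator modules (they can still be busy right after the last disconnect), and killprocess then stops the target by the pid recorded at startup. A simplified sketch of that sequence (the retry-with-sleep loop is an assumption; the common.sh helper wraps the same modprobe calls in a bounded loop with set +e/-e, and the pid below is the one from this run):

  for i in {1..20}; do
    # -v prints the rmmod calls seen in the log (nvme_tcp, nvme_fabrics, nvme_keyring).
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
  done

  pid=3534620                                              # nvmfpid reported by nvmfappstart above
  if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
    kill "$pid"    # killprocess then waits on the pid, which works because it launched the target
  fi
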
00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.928 rmmod nvme_tcp 00:15:32.928 rmmod nvme_fabrics 00:15:32.928 rmmod nvme_keyring 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3534620 ']' 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3534620 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3534620 ']' 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3534620 00:15:32.928 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:15:33.189 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.189 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3534620 00:15:33.189 20:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.189 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.189 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3534620' 00:15:33.189 killing process with pid 3534620 00:15:33.189 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3534620 00:15:33.189 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3534620 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.131 20:22:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.044 00:15:36.044 real 0m13.002s 00:15:36.044 user 0m13.934s 00:15:36.044 sys 0m6.241s 00:15:36.044 20:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.044 ************************************ 00:15:36.044 END TEST nvmf_referrals 00:15:36.044 ************************************ 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.044 20:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.044 ************************************ 00:15:36.044 START TEST nvmf_connect_disconnect 00:15:36.044 ************************************ 00:15:36.044 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:36.305 * Looking for test storage... 00:15:36.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.305 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.306 20:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:15:44.449 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.449 20:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:44.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:44.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.449 20:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:44.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:44.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:44.449 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:44.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:15:44.450 00:15:44.450 --- 10.0.0.2 ping statistics --- 00:15:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.450 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
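The nvmf_tcp_init sequence above splits the two-port NIC into a "target" side and an "initiator" side by pushing the target port into its own network namespace, so the two ports behave like separate hosts on the 10.0.0.0/24 link. Condensed into a standalone sketch with the interface names and addresses from this run (root privileges assumed, error handling omitted):

  tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk

  # Clear old addresses, create the namespace, move the target port into it.
  ip -4 addr flush "$tgt_if"
  ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"

  # Initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev "$ini_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up

  # Let NVMe/TCP (port 4420) in on the initiator interface, then check both paths.
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1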
00:15:44.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:15:44.450 00:15:44.450 --- 10.0.0.1 ping statistics --- 00:15:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.450 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3539421 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3539421 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3539421 ']' 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.450 20:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 [2024-07-22 20:22:55.447872] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
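With the namespace in place, nvmf_tgt is started inside it (the NVMF_TARGET_NS_CMD prefix above) while the connect/disconnect workload runs from the default namespace. The bring-up recorded in the next few log lines, plus the loop behind the long run of "disconnected 1 controller(s)" messages, reduces to roughly the following; the loop body itself is not shown in this excerpt, so treat it as a reconstruction (rpc_cmd, waitforlisten and NVME_HOST are test-harness helpers, and the SPDK path is shortened):

  # Start the target inside the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"

  # TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem
  # listening on the target-side address.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512        # reports the new bdev name, Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 100 connect/disconnect cycles; each disconnect prints one of the
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines below.
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done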
00:15:44.450 [2024-07-22 20:22:55.448000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.450 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.450 [2024-07-22 20:22:55.582950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.450 [2024-07-22 20:22:55.766402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.450 [2024-07-22 20:22:55.766443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.450 [2024-07-22 20:22:55.766458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.450 [2024-07-22 20:22:55.766468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.450 [2024-07-22 20:22:55.766478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.450 [2024-07-22 20:22:55.766649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.450 [2024-07-22 20:22:55.766733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.450 [2024-07-22 20:22:55.766845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.450 [2024-07-22 20:22:55.766873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 [2024-07-22 20:22:56.243882] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.450 20:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.451 [2024-07-22 20:22:56.340431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:15:44.451 20:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:46.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.039 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:18:16.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:44.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:12.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:17.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.026 rmmod nvme_tcp 00:19:38.026 rmmod nvme_fabrics 00:19:38.026 rmmod nvme_keyring 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3539421 ']' 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3539421 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3539421 ']' 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3539421 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.026 20:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3539421 00:19:38.026 20:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:38.026 20:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:38.026 20:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3539421' 00:19:38.026 killing process with pid 3539421 00:19:38.026 20:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3539421 00:19:38.026 20:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3539421 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.413 20:26:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.328 00:19:41.328 real 4m5.064s 00:19:41.328 user 15m32.087s 00:19:41.328 sys 0m23.433s 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.328 ************************************ 00:19:41.328 END TEST nvmf_connect_disconnect 00:19:41.328 ************************************ 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.328 ************************************ 00:19:41.328 START TEST nvmf_multitarget 00:19:41.328 ************************************ 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:41.328 * Looking for test storage... 00:19:41.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.328 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.329 
20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:19:41.329 20:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.329 20:26:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.542 20:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:49.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
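The nvmf_multitarget run now repeats the NIC discovery the previous test did: known Intel (E810 0x1592/0x159b, X722 0x37d2) and Mellanox device IDs are matched against the PCI bus, and each matching function is tied to its kernel netdev through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines come from. That sysfs mapping can be reproduced on its own roughly like this; a simplified sketch that uses lspci instead of the harness's internal PCI cache:

  # Map every Intel E810 function (vendor 0x8086, device 0x159b) to its netdev name.
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done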
00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:49.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:49.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:49.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.542 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:19:49.543 00:19:49.543 --- 10.0.0.2 ping statistics --- 00:19:49.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.543 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:49.543 00:19:49.543 --- 10.0.0.1 ping statistics --- 00:19:49.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.543 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3590938 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3590938 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3590938 ']' 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
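The nvmf_tcp_init sequence traced above amounts to a short piece of iproute2 plumbing: the first E810 port (cvl_0_0) becomes the target interface inside a private namespace at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A standalone sketch of that bring-up, using the interface names and addresses seen in this run (not the harness code itself):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace, which is what the 'Waiting for process to start up...' message above is polling for.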
00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.543 20:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:49.543 [2024-07-22 20:27:00.497047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:49.543 [2024-07-22 20:27:00.497172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.543 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.543 [2024-07-22 20:27:00.636519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.543 [2024-07-22 20:27:00.820194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.543 [2024-07-22 20:27:00.820248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.543 [2024-07-22 20:27:00.820261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.543 [2024-07-22 20:27:00.820271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.543 [2024-07-22 20:27:00.820282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.543 [2024-07-22 20:27:00.820390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.543 [2024-07-22 20:27:00.820474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.543 [2024-07-22 20:27:00.820587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.543 [2024-07-22 20:27:00.820614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:49.543 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:49.543 "nvmf_tgt_1" 00:19:49.543 20:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:49.804 "nvmf_tgt_2" 00:19:49.804 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:49.804 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:49.804 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:49.804 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:49.804 true 00:19:49.804 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:50.065 true 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.065 20:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.065 rmmod nvme_tcp 00:19:50.065 rmmod nvme_fabrics 00:19:50.065 rmmod nvme_keyring 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3590938 ']' 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3590938 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3590938 ']' 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3590938 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
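The multitarget check itself is compact: count the targets through the JSON-RPC helper, create two more, confirm the count went from 1 to 3, delete them again, and confirm it is back to 1. A condensed version of that flow, with the helper path shortened (rpc here stands for the multitarget_rpc.py script invoked above):

  rpc=./test/nvmf/target/multitarget_rpc.py           # shortened path
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target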
00:19:50.065 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3590938 00:19:50.326 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.326 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.326 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3590938' 00:19:50.326 killing process with pid 3590938 00:19:50.326 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3590938 00:19:50.326 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3590938 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.268 20:27:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:53.181 00:19:53.181 real 0m11.901s 00:19:53.181 user 0m10.773s 00:19:53.181 sys 0m5.783s 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:53.181 ************************************ 00:19:53.181 END TEST nvmf_multitarget 00:19:53.181 ************************************ 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.181 ************************************ 00:19:53.181 START TEST nvmf_rpc 00:19:53.181 ************************************ 00:19:53.181 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:53.443 * Looking for test storage... 
00:19:53.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.443 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.444 20:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.444 20:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.589 20:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:01.589 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:01.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.589 
20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:01.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:01.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.589 20:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.589 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:20:01.590 00:20:01.590 --- 10.0.0.2 ping statistics --- 00:20:01.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.590 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:20:01.590 00:20:01.590 --- 10.0.0.1 ping statistics --- 00:20:01.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.590 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3596039 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3596039 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3596039 ']' 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.590 20:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 [2024-07-22 20:27:12.552508] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:01.590 [2024-07-22 20:27:12.552632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.590 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.590 [2024-07-22 20:27:12.686279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.590 [2024-07-22 20:27:12.869860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.590 [2024-07-22 20:27:12.869906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.590 [2024-07-22 20:27:12.869919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.590 [2024-07-22 20:27:12.869929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.590 [2024-07-22 20:27:12.869940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
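As in the multitarget run, nvmfappstart launches nvmf_tgt inside the target namespace and then waits for its RPC socket to come up; the EAL and reactor notices above are that start-up completing on the four cores selected by -m 0xF. A simplified stand-in for the same step (the polling loop is an approximation of waitforlisten, not its actual implementation):

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all tracepoint groups, cores 0-3
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                       # keep polling /var/tmp/spdk.sock
  done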
00:20:01.590 [2024-07-22 20:27:12.870137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.590 [2024-07-22 20:27:12.873225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.590 [2024-07-22 20:27:12.873306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.590 [2024-07-22 20:27:12.873330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:20:01.590 "tick_rate": 2400000000, 00:20:01.590 "poll_groups": [ 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_000", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_001", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_002", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_003", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [] 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 }' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
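The stats JSON above is consumed with two small jq helpers: jcount counts how many values a filter yields (here, four poll-group names, one per core), and jsum adds the values up (used just below for the admin and I/O qpair counts, which must still be zero). Roughly, under the names rpc.sh uses (a sketch, not the script's exact text):

  jcount() {   # number of values a jq filter produces over $stats
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  jsum() {     # arithmetic sum of the values a jq filter produces over $stats
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  stats=$(rpc_cmd nvmf_get_stats)
  (( $(jcount '.poll_groups[].name') == 4 ))          # one poll group per reactor core
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))    # no hosts connected yet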
00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 [2024-07-22 20:27:13.446174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:20:01.590 "tick_rate": 2400000000, 00:20:01.590 "poll_groups": [ 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_000", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [ 00:20:01.590 { 00:20:01.590 "trtype": "TCP" 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_001", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [ 00:20:01.590 { 00:20:01.590 "trtype": "TCP" 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_002", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [ 00:20:01.590 { 00:20:01.590 "trtype": "TCP" 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 }, 00:20:01.590 { 00:20:01.590 "name": "nvmf_tgt_poll_group_003", 00:20:01.590 "admin_qpairs": 0, 00:20:01.590 "io_qpairs": 0, 00:20:01.590 "current_admin_qpairs": 0, 00:20:01.590 "current_io_qpairs": 0, 00:20:01.590 "pending_bdev_io": 0, 00:20:01.590 "completed_nvme_io": 0, 00:20:01.590 "transports": [ 00:20:01.590 { 00:20:01.590 "trtype": "TCP" 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 } 00:20:01.590 ] 00:20:01.590 }' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:20:01.590 20:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.590 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 Malloc1 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 [2024-07-22 20:27:13.671072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:20:01.850 [2024-07-22 20:27:13.698185] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:20:01.850 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:01.850 could not add new controller: failed to write to nvme-fabrics device 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.850 20:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:03.761 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:20:03.761 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:03.761 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:03.761 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:03.761 20:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:05.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.675 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:05.676 [2024-07-22 20:27:17.604135] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:20:05.676 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:05.676 could not add new controller: failed to write to nvme-fabrics device 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.676 20:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:07.589 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:20:07.589 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.589 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:07.589 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:07.589 20:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
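At this point the trace has exercised the NVMe-oF host access-control path end to end: with allow_any_host disabled on the subsystem, the first nvme connect from this host NQN is rejected ("does not allow host"), it succeeds once the host is added via nvmf_subsystem_add_host, and after nvmf_subsystem_remove_host it fails again until allow_any_host is re-enabled. A minimal standalone sketch of that sequence, assuming rpc_cmd in this suite is a thin wrapper around SPDK's scripts/rpc.py and that a target is already running (NQN, bdev, address and port are taken from the run above; $HOSTNQN stands in for the host NQN used here):

  # Sketch only, not the test script itself; the rpc.py path and a running target are assumptions.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # restrict access to listed hosts
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"    # rejected: host not on the allow list
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"    # accepted

The allow_any_host -e call seen just above is the alternative path: re-enabling any-host access lets the same connect succeed without an explicit add_host entry.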
00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:09.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.501 [2024-07-22 20:27:21.499182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.501 
20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.501 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:09.761 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.761 20:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:11.144 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:11.144 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:11.144 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:11.144 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:11.144 20:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:13.060 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:13.061 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:13.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
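The waitforserial and waitforserial_disconnect steps that recur through this loop poll lsblk on the initiator until a block device carrying the subsystem serial SPDKISFASTANDAWESOME appears (or, for disconnect, disappears). A rough reconstruction of that polling shape, simplified from the trace rather than copied from the helper source (the 15-retry bound and 2-second sleep are as logged):

  # Wait until `want` block devices with the given serial show up in lsblk output.
  waitforserial() {
      local serial=$1 want=${2:-1} i=0
      while (( i++ <= 15 )); do
          sleep 2
          # grep -c counts matching NAME,SERIAL rows; compare against the expected device count
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )); then
              return 0
          fi
      done
      return 1
  }

waitforserial_disconnect is the mirror image: it keeps checking with grep -q -w and returns once the serial is no longer listed.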
00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.322 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 [2024-07-22 20:27:25.340418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.583 20:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:14.967 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:14.967 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:20:14.967 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:14.967 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:14.967 20:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:16.917 20:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:17.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.203 [2024-07-22 20:27:29.199269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.203 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.464 20:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:18.848 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:18.848 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:18.848 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:18.848 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:18.848 20:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:21.393 20:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:21.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:21.393 20:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 [2024-07-22 20:27:33.104588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.393 20:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:22.777 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:22.777 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:22.777 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:22.777 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:22.777 20:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:24.691 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:24.691 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:24.691 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:24.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.952 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.214 [2024-07-22 20:27:36.991021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.214 20:27:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.214 20:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:26.601 20:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:26.601 20:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:26.601 20:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:26.601 20:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:26.601 20:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:28.664 20:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:28.664 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:28.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.925 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 [2024-07-22 20:27:40.851582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 [2024-07-22 20:27:40.915746] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.926 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 [2024-07-22 20:27:40.979947] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 [2024-07-22 20:27:41.040113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 [2024-07-22 20:27:41.100351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.188 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.189 20:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:20:29.189 "tick_rate": 2400000000, 00:20:29.189 "poll_groups": [ 00:20:29.189 { 00:20:29.189 "name": "nvmf_tgt_poll_group_000", 00:20:29.189 "admin_qpairs": 0, 00:20:29.189 "io_qpairs": 224, 00:20:29.189 "current_admin_qpairs": 0, 00:20:29.189 "current_io_qpairs": 0, 00:20:29.189 "pending_bdev_io": 0, 00:20:29.189 "completed_nvme_io": 333, 00:20:29.189 "transports": [ 00:20:29.189 { 00:20:29.189 "trtype": "TCP" 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 }, 00:20:29.189 { 00:20:29.189 "name": "nvmf_tgt_poll_group_001", 00:20:29.189 "admin_qpairs": 1, 00:20:29.189 "io_qpairs": 223, 00:20:29.189 "current_admin_qpairs": 0, 00:20:29.189 "current_io_qpairs": 0, 00:20:29.189 "pending_bdev_io": 0, 00:20:29.189 "completed_nvme_io": 386, 00:20:29.189 "transports": [ 00:20:29.189 { 00:20:29.189 "trtype": "TCP" 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 }, 00:20:29.189 { 00:20:29.189 "name": "nvmf_tgt_poll_group_002", 00:20:29.189 "admin_qpairs": 6, 00:20:29.189 "io_qpairs": 218, 00:20:29.189 "current_admin_qpairs": 0, 00:20:29.189 "current_io_qpairs": 0, 00:20:29.189 "pending_bdev_io": 0, 00:20:29.189 "completed_nvme_io": 218, 00:20:29.189 "transports": [ 00:20:29.189 { 00:20:29.189 "trtype": "TCP" 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 }, 00:20:29.189 { 00:20:29.189 "name": "nvmf_tgt_poll_group_003", 00:20:29.189 "admin_qpairs": 0, 00:20:29.189 "io_qpairs": 224, 00:20:29.189 "current_admin_qpairs": 0, 00:20:29.189 "current_io_qpairs": 0, 00:20:29.189 "pending_bdev_io": 0, 00:20:29.189 "completed_nvme_io": 302, 00:20:29.189 "transports": [ 00:20:29.189 { 00:20:29.189 "trtype": "TCP" 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 }' 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:29.189 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:29.450 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:29.450 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:29.450 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:29.450 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.451 rmmod nvme_tcp 00:20:29.451 rmmod nvme_fabrics 00:20:29.451 rmmod nvme_keyring 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3596039 ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3596039 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3596039 ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3596039 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3596039 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3596039' 00:20:29.451 killing process with pid 3596039 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3596039 00:20:29.451 20:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3596039 00:20:30.392 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.393 20:27:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:32.940 00:20:32.940 real 0m39.261s 00:20:32.940 user 1m58.628s 00:20:32.940 sys 0m7.484s 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:32.940 ************************************ 00:20:32.940 END TEST nvmf_rpc 00:20:32.940 ************************************ 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.940 ************************************ 00:20:32.940 START TEST nvmf_invalid 00:20:32.940 ************************************ 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:32.940 * Looking for test storage... 
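The qpair check that closes the nvmf_rpc suite above boils down to summing per-poll-group counters. A minimal sketch, assuming the stats are fetched with rpc.py nvmf_get_stats (the producing command is cut off in this trace; only the jq filter and the awk sum are visible):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# flatten each poll group's io_qpairs count, sum them, and require at least one
qpairs=$($rpc_py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
(( qpairs > 0 ))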
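The teardown traced above (nvmftestfini) then unloads the nvme-tcp/nvme-fabrics modules and kills the target through a killprocess helper. A rough sketch of that helper's shape as it appears in the trace, not autotest_common.sh verbatim:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                    # is the target still alive?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        [ "$name" != sudo ] || return 1           # never signal a bare sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}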
00:20:32.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
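The host identity set up while common.sh is sourced above comes from nvme-cli: gen-hostnqn emits a uuid-based NQN and the host ID reuses its uuid suffix. The parameter expansion below is an assumption; only the resulting values appear in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}        # assumed split: keep the <uuid> part
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")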
00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:20:32.940 20:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.532 20:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:39.532 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:39.532 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
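The device discovery traced above walks a pre-built pci_bus_cache and matches Intel E810 IDs (0x8086:0x159b) bound to the ice driver. A standalone sysfs sketch of the same idea, not the common.sh implementation:

intel=0x8086 e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
    drv=unbound
    [[ -e $pci/driver ]] && drv=$(basename "$(readlink -f "$pci/driver")")   # e.g. ice
    echo "Found ${pci##*/} ($intel - $e810), driver: $drv"
done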
00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.532 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:39.533 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:39.533 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.533 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:20:39.795 00:20:39.795 --- 10.0.0.2 ping statistics --- 00:20:39.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.795 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:39.795 00:20:39.795 --- 10.0.0.1 ping statistics --- 00:20:39.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.795 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3606102 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3606102 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3606102 ']' 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.795 20:27:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:39.795 [2024-07-22 20:27:51.779595] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
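Before the target app is started, nvmf_tcp_init splits the two E810 ports between the host and a private network namespace and verifies reachability both ways. Condensed from the trace above (same commands, xtrace noise removed):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # host  -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host
# the target itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF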
00:20:39.795 [2024-07-22 20:27:51.779720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.055 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.055 [2024-07-22 20:27:51.915501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.315 [2024-07-22 20:27:52.099109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.315 [2024-07-22 20:27:52.099152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.315 [2024-07-22 20:27:52.099165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.315 [2024-07-22 20:27:52.099174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.315 [2024-07-22 20:27:52.099184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.315 [2024-07-22 20:27:52.099380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.315 [2024-07-22 20:27:52.099458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.315 [2024-07-22 20:27:52.099572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.315 [2024-07-22 20:27:52.099600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:40.575 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1675 00:20:40.835 [2024-07-22 20:27:52.704259] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:40.835 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:40.835 { 00:20:40.835 "nqn": "nqn.2016-06.io.spdk:cnode1675", 00:20:40.835 "tgt_name": "foobar", 00:20:40.835 "method": "nvmf_create_subsystem", 00:20:40.835 "req_id": 1 00:20:40.835 } 00:20:40.835 Got JSON-RPC error response 00:20:40.835 response: 00:20:40.835 { 00:20:40.835 "code": -32603, 00:20:40.835 "message": "Unable to find target foobar" 00:20:40.835 }' 00:20:40.835 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:40.835 { 00:20:40.835 "nqn": "nqn.2016-06.io.spdk:cnode1675", 00:20:40.835 "tgt_name": "foobar", 00:20:40.835 "method": "nvmf_create_subsystem", 00:20:40.835 "req_id": 1 00:20:40.835 
} 00:20:40.835 Got JSON-RPC error response 00:20:40.835 response: 00:20:40.835 { 00:20:40.835 "code": -32603, 00:20:40.835 "message": "Unable to find target foobar" 00:20:40.835 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:40.835 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:40.835 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11061 00:20:41.096 [2024-07-22 20:27:52.884896] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11061: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:41.096 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:41.096 { 00:20:41.096 "nqn": "nqn.2016-06.io.spdk:cnode11061", 00:20:41.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:41.096 "method": "nvmf_create_subsystem", 00:20:41.096 "req_id": 1 00:20:41.096 } 00:20:41.096 Got JSON-RPC error response 00:20:41.096 response: 00:20:41.096 { 00:20:41.096 "code": -32602, 00:20:41.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:41.096 }' 00:20:41.096 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:41.096 { 00:20:41.096 "nqn": "nqn.2016-06.io.spdk:cnode11061", 00:20:41.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:41.096 "method": "nvmf_create_subsystem", 00:20:41.096 "req_id": 1 00:20:41.096 } 00:20:41.096 Got JSON-RPC error response 00:20:41.096 response: 00:20:41.096 { 00:20:41.096 "code": -32602, 00:20:41.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:41.096 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:41.096 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:41.096 20:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21818 00:20:41.096 [2024-07-22 20:27:53.061469] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21818: invalid model number 'SPDK_Controller' 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:41.096 { 00:20:41.096 "nqn": "nqn.2016-06.io.spdk:cnode21818", 00:20:41.096 "model_number": "SPDK_Controller\u001f", 00:20:41.096 "method": "nvmf_create_subsystem", 00:20:41.096 "req_id": 1 00:20:41.096 } 00:20:41.096 Got JSON-RPC error response 00:20:41.096 response: 00:20:41.096 { 00:20:41.096 "code": -32602, 00:20:41.096 "message": "Invalid MN SPDK_Controller\u001f" 00:20:41.096 }' 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:41.096 { 00:20:41.096 "nqn": "nqn.2016-06.io.spdk:cnode21818", 00:20:41.096 "model_number": "SPDK_Controller\u001f", 00:20:41.096 "method": "nvmf_create_subsystem", 00:20:41.096 "req_id": 1 00:20:41.096 } 00:20:41.096 Got JSON-RPC error response 00:20:41.096 response: 00:20:41.096 { 00:20:41.096 "code": -32602, 00:20:41.096 "message": "Invalid MN SPDK_Controller\u001f" 00:20:41.096 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.096 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:41.357 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
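The three negative checks traced just above follow one pattern: call rpc.py nvmf_create_subsystem with a bad target name, serial number, or model number, expect the JSON-RPC call to fail, and pattern-match the error text. Condensed sketch (error capture simplified relative to invalid.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1675 2>&1) || true
[[ $out == *"Unable to find target"* ]]
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11061 2>&1) || true
[[ $out == *"Invalid SN"* ]]
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21818 2>&1) || true
[[ $out == *"Invalid MN"* ]]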
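The repetitive per-character trace running above and below is gen_random_s building a 21-character string for the follow-on invalid serial/model tests. A condensed sketch with an assumed structure; the traced helper indexes a fixed 32..127 chars table and the suite seeds RANDOM=0 for reproducibility:

gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        # 33..126 keeps to printable, non-space ASCII so the command substitution
        # cannot strip the chosen character; the traced table is wider (32..127)
        string+=$(printf "\\x$(printf %x $(( RANDOM % 94 + 33 )))")
    done
    echo -n "$string"
}
# gen_random_s 21 then feeds nvmf_create_subsystem -s/-d further down in the trace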
00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=v 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x7d' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:20:41.358 20:27:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9N/m|&u:06/vPi~ /dev/null' 00:20:44.724 20:27:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:46.639 00:20:46.639 real 0m13.983s 00:20:46.639 user 0m20.614s 00:20:46.639 sys 0m6.329s 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:46.639 ************************************ 00:20:46.639 END TEST nvmf_invalid 00:20:46.639 ************************************ 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.639 ************************************ 00:20:46.639 START TEST nvmf_connect_stress 00:20:46.639 ************************************ 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:46.639 * Looking for test storage... 
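Each suite in this log is launched through a run_test wrapper that prints the START/END banners and the real/user/sys timings seen above. A rough sketch of its shape, assumed rather than quoted from autotest_common.sh (banner formatting approximated):

run_test() {
    [ $# -le 1 ] && return 1          # matches the '[' 3 -le 1 ']' guard in the trace
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                         # e.g. .../connect_stress.sh --transport=tcp
    echo "************ END TEST $name ************"
}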
00:20:46.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.639 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.900 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.901 20:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.493 20:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:53.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:53.493 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:53.493 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:53.493 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.493 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.494 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.494 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.494 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.494 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.494 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.754 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:20:54.015 00:20:54.015 --- 10.0.0.2 ping statistics --- 00:20:54.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.015 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:20:54.015 00:20:54.015 --- 10.0.0.1 ping statistics --- 00:20:54.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.015 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3611274 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3611274 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3611274 ']' 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.015 20:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.015 [2024-07-22 20:28:05.931499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
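
The nvmf_tcp_init portion of the trace splits the two E810 ports across network namespaces so that target and initiator traffic really crosses the wire: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where the target runs, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. A condensed sketch of those steps, assuming root privileges and the interface names and addresses from this run:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                               # root namespace -> target address
  ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> initiator address
  # the target application is then launched inside the namespace, as above:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
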
00:20:54.015 [2024-07-22 20:28:05.931596] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.015 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.276 [2024-07-22 20:28:06.069688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:54.276 [2024-07-22 20:28:06.273926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.276 [2024-07-22 20:28:06.273996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.276 [2024-07-22 20:28:06.274012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.276 [2024-07-22 20:28:06.274023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.276 [2024-07-22 20:28:06.274035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.276 [2024-07-22 20:28:06.274237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.277 [2024-07-22 20:28:06.274387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.277 [2024-07-22 20:28:06.274465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 [2024-07-22 20:28:06.761529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 [2024-07-22 20:28:06.807919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.849 NULL1 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3611387 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.849 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:54.850 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.111 20:28:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.111 20:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.372 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.373 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:55.373 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.373 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.373 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.633 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.633 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:55.633 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.633 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.633 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.894 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.894 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:55.894 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.894 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.894 20:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:56.466 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.466 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:56.466 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:56.466 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.466 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:56.727 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.727 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:56.727 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:56.727 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.727 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:56.988 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.988 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:56.988 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:56.988 20:28:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.988 20:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:57.249 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.249 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:57.249 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:57.249 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.249 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:57.511 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.772 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:57.772 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:57.772 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.772 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:58.033 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.033 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:58.033 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:58.033 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.033 20:28:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:58.294 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.294 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:58.294 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:58.294 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.294 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:58.555 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.555 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:58.555 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:58.555 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.555 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:58.816 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.816 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:58.816 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:58.816 20:28:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.816 20:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:59.388 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.388 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:59.388 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:59.388 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.388 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:59.648 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.648 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:59.648 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:59.648 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.648 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:59.909 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.909 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:20:59.909 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:59.909 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.909 20:28:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:00.170 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.170 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:00.170 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:00.170 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.170 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:00.741 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.741 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:00.741 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:00.741 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.742 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:01.002 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.002 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:01.002 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:01.002 20:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.002 20:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:01.264 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.264 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:01.264 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:01.264 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.264 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:01.524 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.524 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:01.524 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:01.524 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.524 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:01.785 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.785 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:01.785 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:01.785 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.785 20:28:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:02.357 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.357 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:02.357 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:02.357 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.357 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:02.618 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.618 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:02.618 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:02.618 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.618 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:02.878 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.878 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:02.878 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:02.878 20:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.878 20:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:03.139 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.139 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:03.139 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:03.139 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.139 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:03.400 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.400 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:03.400 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:03.400 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.400 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:03.972 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.972 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:03.972 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:03.972 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.972 20:28:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:04.234 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.234 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:04.234 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:04.234 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.234 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:04.495 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.495 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:04.495 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:04.495 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.495 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.756 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:04.756 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:04.756 20:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.756 20:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.017 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:05.017 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:05.017 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.017 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:05.277 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3611387 00:21:05.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3611387) - No such process 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3611387 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.539 rmmod nvme_tcp 00:21:05.539 rmmod nvme_fabrics 00:21:05.539 rmmod nvme_keyring 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3611274 ']' 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3611274 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3611274 ']' 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3611274 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.539 20:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3611274 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3611274' 00:21:05.539 killing process with pid 3611274 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3611274 00:21:05.539 20:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3611274 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.111 20:28:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.661 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.661 00:21:08.661 real 0m21.657s 00:21:08.661 user 0m44.249s 00:21:08.661 sys 0m8.491s 00:21:08.661 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.661 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.661 ************************************ 00:21:08.661 END TEST nvmf_connect_stress 00:21:08.661 ************************************ 00:21:08.661 20:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:08.661 20:28:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.662 ************************************ 00:21:08.662 START TEST nvmf_fused_ordering 00:21:08.662 ************************************ 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:08.662 * Looking for test storage... 
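
Between target start-up and the teardown above, connect_stress.sh configures the subsystem over RPC, starts the connect_stress client, and keeps replaying a batch of RPCs for as long as the client stays alive. A sketch of that flow, assuming rpc_cmd is roughly equivalent to calling scripts/rpc.py against /var/tmp/spdk.sock and using the parameters from this run; the namespace deletion at the end is an assumption, since the trace disables xtrace around _remove_spdk_ns:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  # target configuration, exactly as issued in the trace
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512

  # start the stress client against the listener (-t 10 as in the trace)
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!

  # while the client is alive, replay the pre-built RPC batch (rpc.txt is
  # filled by the "cat" calls in the trace) to keep the target busy
  while kill -0 "$PERF_PID" 2>/dev/null; do
      while read -r cmd; do $RPC $cmd; done < rpc.txt
  done
  wait "$PERF_PID"
  rm -f rpc.txt

  # teardown, mirroring nvmftestfini: unload host modules, stop the target,
  # drop the namespace and flush the initiator-side address
  modprobe -r nvme-tcp nvme-fabrics
  kill "$NVMF_TGT_PID" 2>/dev/null      # hypothetical variable holding the nvmf_tgt pid
  ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
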
00:21:08.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.662 20:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.299 20:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.299 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:15.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:15.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
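
As with connect_stress, sourcing nvmf/common.sh for fused_ordering first generates a host identity with nvme gen-hostnqn and stores it in NVME_HOSTNQN/NVME_HOSTID for the NVME_CONNECT helper. Whether this particular test ends up doing a kernel-initiator connect is outside this excerpt; purely as an illustration of what those variables expand to, using the address and subsystem naming seen earlier in this log:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # the trailing uuid doubles as the host ID
  # illustrative kernel-initiator connect using those values
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
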
00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:15.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:15.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.300 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.560 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.560 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.560 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.561 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.561 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.561 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.561 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.810 ms 00:21:15.561 00:21:15.561 --- 10.0.0.2 ping statistics --- 00:21:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.561 rtt min/avg/max/mdev = 0.810/0.810/0.810/0.000 ms 00:21:15.561 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
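The namespace wiring traced above gives the test a two-port loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and connectivity is verified in both directions (the second ping's replies follow below). A condensed sketch of the same sequence, with the commands copied from the trace and meant to be run as root:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the other port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator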
00:21:15.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:21:15.821 00:21:15.821 --- 10.0.0.1 ping statistics --- 00:21:15.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.821 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3617662 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3617662 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3617662 ']' 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.821 20:28:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:15.821 [2024-07-22 20:28:27.738277] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
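The target application is launched inside that namespace so it owns the 10.0.0.2 side of the link; the startup banner at this point, and the EAL notices that follow, come from that process. Roughly, the harness does the equivalent of the sketch below (binary path, shm id, tracepoint mask and core mask are taken from the trace; the wait loop is a simplified stand-in for waitforlisten):

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                  # 3617662 in this run
    # Block until the target is up and serving its RPC socket.
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1            # give up if the target died during startup
        sleep 0.1
    done

The -m 0x2 core mask and -e 0xFFFF tracepoint mask are echoed back in the app notices that follow ("Total cores available: 1", "Tracepoint Group Mask 0xFFFF specified").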
00:21:15.821 [2024-07-22 20:28:27.738404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.821 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.082 [2024-07-22 20:28:27.889462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.343 [2024-07-22 20:28:28.117561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.344 [2024-07-22 20:28:28.117630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.344 [2024-07-22 20:28:28.117645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.344 [2024-07-22 20:28:28.117656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.344 [2024-07-22 20:28:28.117668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.344 [2024-07-22 20:28:28.117703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.605 [2024-07-22 20:28:28.532342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.605 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.606 [2024-07-22 20:28:28.552600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.606 NULL1 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.606 20:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:16.606 [2024-07-22 20:28:28.609904] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
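Spelled out, the rpc_cmd calls above amount to the following target configuration, after which the fused_ordering exerciser is pointed at the new listener (its DPDK/EAL startup banner begins just above and continues below). This is a sketch: in the harness, rpc_cmd is roughly equivalent to running scripts/rpc.py against the target's default /var/tmp/spdk.sock.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the harness's default options
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                # null bdev backing the namespace (reported as 1GB)
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'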
00:21:16.606 [2024-07-22 20:28:28.609971] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618006 ] 00:21:16.867 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.435 Attached to nqn.2016-06.io.spdk:cnode1 00:21:17.435 Namespace ID: 1 size: 1GB 00:21:17.435 fused_ordering(0) 00:21:17.435 fused_ordering(1) 00:21:17.435 fused_ordering(2) 00:21:17.435 fused_ordering(3) 00:21:17.435 fused_ordering(4) 00:21:17.435 fused_ordering(5) 00:21:17.435 fused_ordering(6) 00:21:17.435 fused_ordering(7) 00:21:17.435 fused_ordering(8) 00:21:17.435 fused_ordering(9) 00:21:17.435 fused_ordering(10) 00:21:17.435 fused_ordering(11) 00:21:17.435 fused_ordering(12) 00:21:17.435 fused_ordering(13) 00:21:17.435 fused_ordering(14) 00:21:17.436 fused_ordering(15) 00:21:17.436 fused_ordering(16) 00:21:17.436 fused_ordering(17) 00:21:17.436 fused_ordering(18) 00:21:17.436 fused_ordering(19) 00:21:17.436 fused_ordering(20) 00:21:17.436 fused_ordering(21) 00:21:17.436 fused_ordering(22) 00:21:17.436 fused_ordering(23) 00:21:17.436 fused_ordering(24) 00:21:17.436 fused_ordering(25) 00:21:17.436 fused_ordering(26) 00:21:17.436 fused_ordering(27) 00:21:17.436 fused_ordering(28) 00:21:17.436 fused_ordering(29) 00:21:17.436 fused_ordering(30) 00:21:17.436 fused_ordering(31) 00:21:17.436 fused_ordering(32) 00:21:17.436 fused_ordering(33) 00:21:17.436 fused_ordering(34) 00:21:17.436 fused_ordering(35) 00:21:17.436 fused_ordering(36) 00:21:17.436 fused_ordering(37) 00:21:17.436 fused_ordering(38) 00:21:17.436 fused_ordering(39) 00:21:17.436 fused_ordering(40) 00:21:17.436 fused_ordering(41) 00:21:17.436 fused_ordering(42) 00:21:17.436 fused_ordering(43) 00:21:17.436 fused_ordering(44) 00:21:17.436 fused_ordering(45) 00:21:17.436 fused_ordering(46) 00:21:17.436 fused_ordering(47) 00:21:17.436 fused_ordering(48) 00:21:17.436 fused_ordering(49) 00:21:17.436 fused_ordering(50) 00:21:17.436 fused_ordering(51) 00:21:17.436 fused_ordering(52) 00:21:17.436 fused_ordering(53) 00:21:17.436 fused_ordering(54) 00:21:17.436 fused_ordering(55) 00:21:17.436 fused_ordering(56) 00:21:17.436 fused_ordering(57) 00:21:17.436 fused_ordering(58) 00:21:17.436 fused_ordering(59) 00:21:17.436 fused_ordering(60) 00:21:17.436 fused_ordering(61) 00:21:17.436 fused_ordering(62) 00:21:17.436 fused_ordering(63) 00:21:17.436 fused_ordering(64) 00:21:17.436 fused_ordering(65) 00:21:17.436 fused_ordering(66) 00:21:17.436 fused_ordering(67) 00:21:17.436 fused_ordering(68) 00:21:17.436 fused_ordering(69) 00:21:17.436 fused_ordering(70) 00:21:17.436 fused_ordering(71) 00:21:17.436 fused_ordering(72) 00:21:17.436 fused_ordering(73) 00:21:17.436 fused_ordering(74) 00:21:17.436 fused_ordering(75) 00:21:17.436 fused_ordering(76) 00:21:17.436 fused_ordering(77) 00:21:17.436 fused_ordering(78) 00:21:17.436 fused_ordering(79) 00:21:17.436 fused_ordering(80) 00:21:17.436 fused_ordering(81) 00:21:17.436 fused_ordering(82) 00:21:17.436 fused_ordering(83) 00:21:17.436 fused_ordering(84) 00:21:17.436 fused_ordering(85) 00:21:17.436 fused_ordering(86) 00:21:17.436 fused_ordering(87) 00:21:17.436 fused_ordering(88) 00:21:17.436 fused_ordering(89) 00:21:17.436 fused_ordering(90) 00:21:17.436 fused_ordering(91) 00:21:17.436 fused_ordering(92) 00:21:17.436 fused_ordering(93) 00:21:17.436 fused_ordering(94) 00:21:17.436 fused_ordering(95) 00:21:17.436 fused_ordering(96) 
00:21:17.436 [fused_ordering(97) through fused_ordering(956) elided: the exerciser keeps printing one fused_ordering(N) line per entry, in consecutive order with no gaps, while the elapsed-time stamps advance from 00:21:17.436 to 00:21:19.782]
00:21:19.782 fused_ordering(957) 00:21:19.782 fused_ordering(958) 00:21:19.782 fused_ordering(959) 00:21:19.782 fused_ordering(960) 00:21:19.782 fused_ordering(961) 00:21:19.782 fused_ordering(962) 00:21:19.782 fused_ordering(963) 00:21:19.782 fused_ordering(964) 00:21:19.782 fused_ordering(965) 00:21:19.782 fused_ordering(966) 00:21:19.782 fused_ordering(967) 00:21:19.782 fused_ordering(968) 00:21:19.782 fused_ordering(969) 00:21:19.782 fused_ordering(970) 00:21:19.782 fused_ordering(971) 00:21:19.782 fused_ordering(972) 00:21:19.782 fused_ordering(973) 00:21:19.782 fused_ordering(974) 00:21:19.782 fused_ordering(975) 00:21:19.782 fused_ordering(976) 00:21:19.782 fused_ordering(977) 00:21:19.782 fused_ordering(978) 00:21:19.782 fused_ordering(979) 00:21:19.782 fused_ordering(980) 00:21:19.782 fused_ordering(981) 00:21:19.782 fused_ordering(982) 00:21:19.782 fused_ordering(983) 00:21:19.782 fused_ordering(984) 00:21:19.782 fused_ordering(985) 00:21:19.782 fused_ordering(986) 00:21:19.782 fused_ordering(987) 00:21:19.782 fused_ordering(988) 00:21:19.782 fused_ordering(989) 00:21:19.782 fused_ordering(990) 00:21:19.782 fused_ordering(991) 00:21:19.782 fused_ordering(992) 00:21:19.782 fused_ordering(993) 00:21:19.782 fused_ordering(994) 00:21:19.782 fused_ordering(995) 00:21:19.782 fused_ordering(996) 00:21:19.782 fused_ordering(997) 00:21:19.782 fused_ordering(998) 00:21:19.782 fused_ordering(999) 00:21:19.782 fused_ordering(1000) 00:21:19.782 fused_ordering(1001) 00:21:19.782 fused_ordering(1002) 00:21:19.782 fused_ordering(1003) 00:21:19.782 fused_ordering(1004) 00:21:19.782 fused_ordering(1005) 00:21:19.782 fused_ordering(1006) 00:21:19.782 fused_ordering(1007) 00:21:19.782 fused_ordering(1008) 00:21:19.782 fused_ordering(1009) 00:21:19.782 fused_ordering(1010) 00:21:19.782 fused_ordering(1011) 00:21:19.782 fused_ordering(1012) 00:21:19.782 fused_ordering(1013) 00:21:19.782 fused_ordering(1014) 00:21:19.782 fused_ordering(1015) 00:21:19.782 fused_ordering(1016) 00:21:19.782 fused_ordering(1017) 00:21:19.782 fused_ordering(1018) 00:21:19.782 fused_ordering(1019) 00:21:19.782 fused_ordering(1020) 00:21:19.782 fused_ordering(1021) 00:21:19.782 fused_ordering(1022) 00:21:19.782 fused_ordering(1023) 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.782 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.782 rmmod nvme_tcp 00:21:19.782 rmmod nvme_fabrics 00:21:19.782 rmmod nvme_keyring 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3617662 ']' 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3617662 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3617662 ']' 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3617662 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3617662 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3617662' 00:21:19.783 killing process with pid 3617662 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3617662 00:21:19.783 20:28:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3617662 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.353 20:28:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.265 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.265 00:21:22.265 real 0m14.007s 00:21:22.265 user 0m8.121s 00:21:22.265 sys 0m7.116s 00:21:22.265 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.265 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:22.265 ************************************ 00:21:22.265 END TEST nvmf_fused_ordering 00:21:22.265 ************************************ 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:22.526 20:28:34 
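Between tests, nvmftestfini tears the fixture back down: the sequence traced above unloads the kernel NVMe/TCP modules, kills the target process, and flushes the initiator address (the namespace itself is cleaned up by _remove_spdk_ns, whose output the trace redirects away). In shell form, approximately:

    modprobe -v -r nvme-tcp             # also pulls out nvme_fabrics and nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # nvmfpid was 3617662 in this run
    ip -4 addr flush cvl_0_1            # initiator-side address cleanup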
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.526 ************************************ 00:21:22.526 START TEST nvmf_ns_masking 00:21:22.526 ************************************ 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:21:22.526 * Looking for test storage... 00:21:22.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.526 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.527 20:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2ecdb50e-ad6f-44f0-bd6a-5dd0fff33baf 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=009e02b3-b36e-4266-9c32-9454ae617c70 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e50f009e-b647-4330-b130-9617b25c508b 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.527 20:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.666 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.667 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.667 20:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:21:30.667 00:21:30.667 --- 10.0.0.2 ping statistics --- 00:21:30.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.667 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:21:30.667 00:21:30.667 --- 10.0.0.1 ping statistics --- 00:21:30.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.667 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3622674 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3622674 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3622674 ']' 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.667 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.668 20:28:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:30.668 [2024-07-22 20:28:41.623570] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:30.668 [2024-07-22 20:28:41.623696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.668 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.668 [2024-07-22 20:28:41.756054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.668 [2024-07-22 20:28:41.939449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.668 [2024-07-22 20:28:41.939487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.668 [2024-07-22 20:28:41.939500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.668 [2024-07-22 20:28:41.939509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.668 [2024-07-22 20:28:41.939521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.668 [2024-07-22 20:28:41.939551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.668 [2024-07-22 20:28:42.520548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:21:30.668 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:30.928 Malloc1 00:21:30.928 20:28:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:30.928 Malloc2 00:21:30.928 20:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:31.189 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:31.189 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.449 [2024-07-22 20:28:43.339695] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.449 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:31.449 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e50f009e-b647-4330-b130-9617b25c508b -a 10.0.0.2 -s 4420 -i 4 00:21:31.709 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:21:31.709 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:31.709 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.709 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:31.709 20:28:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:33.621 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:33.622 [ 0]:0x1 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:21:33.622 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:33.882 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5ac66106a8fa44af8c178229dc2d7580 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5ac66106a8fa44af8c178229dc2d7580 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:33.883 [ 0]:0x1 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5ac66106a8fa44af8c178229dc2d7580 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5ac66106a8fa44af8c178229dc2d7580 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:33.883 [ 1]:0x2 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:33.883 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:34.143 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:34.143 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:34.144 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:34.144 20:28:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:34.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:34.405 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.405 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:34.665 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:34.665 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e50f009e-b647-4330-b130-9617b25c508b -a 10.0.0.2 -s 4420 -i 4 00:21:34.665 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:34.665 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:34.665 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:34.666 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:21:34.666 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:21:34.666 20:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:37.209 [ 0]:0x2 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.209 20:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:37.209 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:37.209 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.209 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:37.209 [ 0]:0x1 00:21:37.209 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:37.209 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5ac66106a8fa44af8c178229dc2d7580 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5ac66106a8fa44af8c178229dc2d7580 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:37.210 [ 1]:0x2 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.210 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:37.471 [ 0]:0x2 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:37.471 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:37.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:37.731 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:37.731 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:37.731 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e50f009e-b647-4330-b130-9617b25c508b -a 10.0.0.2 -s 4420 -i 4 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:37.990 20:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:39.953 20:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:40.218 [ 0]:0x1 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5ac66106a8fa44af8c178229dc2d7580 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5ac66106a8fa44af8c178229dc2d7580 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:40.218 [ 1]:0x2 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:40.218 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:40.479 20:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:40.479 [ 0]:0x2 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:40.479 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:40.739 [2024-07-22 20:28:52.677882] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:40.739 request: 00:21:40.739 { 00:21:40.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.739 "nsid": 2, 00:21:40.739 "host": "nqn.2016-06.io.spdk:host1", 00:21:40.739 "method": "nvmf_ns_remove_host", 00:21:40.739 "req_id": 1 00:21:40.739 } 00:21:40.739 Got JSON-RPC error response 00:21:40.739 response: 00:21:40.739 { 00:21:40.739 "code": -32602, 00:21:40.739 "message": "Invalid parameters" 00:21:40.739 } 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:40.739 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:40.999 [ 0]:0x2 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b026d969f3f6416198c66d2392119851 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b026d969f3f6416198c66d2392119851 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:40.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:40.999 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3625092 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3625092 /var/tmp/host.sock 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3625092 ']' 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:41.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.000 20:28:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:41.000 [2024-07-22 20:28:52.956739] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
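Up to this point the trace has exercised the core masking flow against the kernel initiator: a namespace added with --no-auto-visible stays hidden (its nguid reads back as all zeroes), becomes visible after nvmf_ns_add_host, disappears again after nvmf_ns_remove_host, and trying to remove a host from an auto-visible namespace is rejected with JSON-RPC error -32602 "Invalid parameters". A condensed sketch of that sequence, assuming the same rpc.py path, subsystem NQN and 10.0.0.2:4420 listener used in this run ($rpc is just shorthand for the full rpc.py path; the actual run also passes -I <host UUID> and -i 4 to nvme connect, omitted here), would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # add a namespace that no host can see by default
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0 | grep 0x1                      # expected: no match while masked
  # expose NSID 1 to host1 only, then check it reports a real NGUID instead of zeroes
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
  # hide it again and drop the connection
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1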
00:21:41.000 [2024-07-22 20:28:52.956852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625092 ] 00:21:41.000 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.260 [2024-07-22 20:28:53.081641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.260 [2024-07-22 20:28:53.257230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.831 20:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.831 20:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:21:41.831 20:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:42.091 20:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2ecdb50e-ad6f-44f0-bd6a-5dd0fff33baf 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2ECDB50EAD6F44F0BD6A5DD0FFF33BAF -i 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 009e02b3-b36e-4266-9c32-9454ae617c70 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:42.351 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 009E02B3B36E42669C329454AE617C70 -i 00:21:42.610 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:42.610 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:42.869 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:42.869 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:43.128 nvme0n1 00:21:43.128 20:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:43.128 20:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:43.388 nvme1n2 00:21:43.388 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:43.388 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:43.388 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:43.388 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:43.388 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2ecdb50e-ad6f-44f0-bd6a-5dd0fff33baf == \2\e\c\d\b\5\0\e\-\a\d\6\f\-\4\4\f\0\-\b\d\6\a\-\5\d\d\0\f\f\f\3\3\b\a\f ]] 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:43.649 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 009e02b3-b36e-4266-9c32-9454ae617c70 == \0\0\9\e\0\2\b\3\-\b\3\6\e\-\4\2\6\6\-\9\c\3\2\-\9\4\5\4\a\e\6\1\7\c\7\0 ]] 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3625092 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3625092 ']' 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3625092 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3625092 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:43.909 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:43.910 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 3625092' 00:21:43.910 killing process with pid 3625092 00:21:43.910 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3625092 00:21:43.910 20:28:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3625092 00:21:45.291 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.550 rmmod nvme_tcp 00:21:45.550 rmmod nvme_fabrics 00:21:45.550 rmmod nvme_keyring 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3622674 ']' 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3622674 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3622674 ']' 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3622674 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3622674 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3622674' 00:21:45.550 killing process with pid 3622674 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3622674 00:21:45.550 20:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3622674 00:21:46.930 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.931 
20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.931 20:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.840 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.841 00:21:48.841 real 0m26.330s 00:21:48.841 user 0m27.253s 00:21:48.841 sys 0m7.342s 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:48.841 ************************************ 00:21:48.841 END TEST nvmf_ns_masking 00:21:48.841 ************************************ 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.841 ************************************ 00:21:48.841 START TEST nvmf_nvme_cli 00:21:48.841 ************************************ 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:48.841 * Looking for test storage... 
00:21:48.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.841 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.101 20:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:49.101 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:49.102 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.102 20:29:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.711 20:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.711 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.711 20:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.711 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.711 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.712 20:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.712 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:21:55.973 00:21:55.973 --- 10.0.0.2 ping statistics --- 00:21:55.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.973 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:21:55.973 00:21:55.973 --- 10.0.0.1 ping statistics --- 00:21:55.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.973 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3630209 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3630209 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3630209 ']' 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.973 20:29:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:56.233 [2024-07-22 20:29:08.028733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:56.233 [2024-07-22 20:29:08.028859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.233 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.233 [2024-07-22 20:29:08.164099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.492 [2024-07-22 20:29:08.349291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.493 [2024-07-22 20:29:08.349332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.493 [2024-07-22 20:29:08.349345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.493 [2024-07-22 20:29:08.349355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.493 [2024-07-22 20:29:08.349365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.493 [2024-07-22 20:29:08.349552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.493 [2024-07-22 20:29:08.349677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.493 [2024-07-22 20:29:08.349721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.493 [2024-07-22 20:29:08.349753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 [2024-07-22 20:29:08.825889] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 Malloc0 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:57.062 20:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 Malloc1 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 [2024-07-22 20:29:08.990776] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.062 20:29:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.062 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:21:57.322 00:21:57.322 Discovery Log Number of Records 2, Generation counter 2 00:21:57.322 =====Discovery Log Entry 0====== 00:21:57.322 trtype: tcp 00:21:57.322 adrfam: ipv4 00:21:57.322 subtype: current discovery subsystem 00:21:57.322 treq: not required 
00:21:57.322 portid: 0 00:21:57.322 trsvcid: 4420 00:21:57.322 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:57.322 traddr: 10.0.0.2 00:21:57.322 eflags: explicit discovery connections, duplicate discovery information 00:21:57.322 sectype: none 00:21:57.322 =====Discovery Log Entry 1====== 00:21:57.322 trtype: tcp 00:21:57.322 adrfam: ipv4 00:21:57.322 subtype: nvme subsystem 00:21:57.322 treq: not required 00:21:57.322 portid: 0 00:21:57.322 trsvcid: 4420 00:21:57.322 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:57.322 traddr: 10.0.0.2 00:21:57.322 eflags: none 00:21:57.322 sectype: none 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:57.322 20:29:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:58.706 20:29:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:22:01.247 /dev/nvme0n1 ]] 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:22:01.247 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:01.507 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.507 rmmod nvme_tcp 00:22:01.507 rmmod nvme_fabrics 00:22:01.507 rmmod nvme_keyring 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3630209 ']' 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3630209 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3630209 ']' 00:22:01.507 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3630209 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3630209 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3630209' 00:22:01.767 killing process with pid 3630209 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3630209 00:22:01.767 20:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3630209 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.708 20:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.258 00:22:05.258 real 0m15.983s 00:22:05.258 user 0m26.251s 00:22:05.258 sys 0m5.978s 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:05.258 ************************************ 00:22:05.258 END TEST nvmf_nvme_cli 00:22:05.258 ************************************ 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.258 ************************************ 00:22:05.258 START TEST nvmf_auth_target 00:22:05.258 ************************************ 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:05.258 * Looking for test storage... 
00:22:05.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.258 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.259 
20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.259 20:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.947 
20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.947 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:11.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:11.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:11.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.948 20:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:11.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:22:11.948 00:22:11.948 --- 10.0.0.2 ping statistics --- 00:22:11.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.948 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:22:11.948 00:22:11.948 --- 10.0.0.1 ping statistics --- 00:22:11.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.948 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.948 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3635587 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3635587 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3635587 ']' 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
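The nvmf_tcp_init sequence traced above builds the test topology out of the two ice ports found earlier: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with TCP port 4420 opened and reachability verified by the pings whose output appears above. A condensed sketch of that wiring, using the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
# nvmf_tgt is then started inside the namespace, as traced below:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth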
00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.210 20:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3635937 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.149 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=96f3de916c3bc86c90cddc1b49282b7f9b6f10a05186ff57 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jT5 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 96f3de916c3bc86c90cddc1b49282b7f9b6f10a05186ff57 0 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 96f3de916c3bc86c90cddc1b49282b7f9b6f10a05186ff57 0 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=96f3de916c3bc86c90cddc1b49282b7f9b6f10a05186ff57 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
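gen_dhchap_key, whose first invocation is traced above, draws random bytes with xxd and wraps them into the DH-HMAC-CHAP secret representation DHHC-1:<hash indicator>:<base64(secret + CRC-32)>: via the short "python -" step. A rough standalone equivalent follows; the little-endian CRC and the indicator mapping (00 null, 01 sha256, 02 sha384, 03 sha512, matching the digests array in the trace) are assumptions that should be checked against format_key in nvmf/common.sh before relying on them:

# Hedged re-creation of "gen_dhchap_key null 48" as traced above.
key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex characters
secret=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 appended before encoding (byte order assumed)
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())   # 00 = null digest, as in this trace
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"                         # matches the chmod 0600 that follows in the trace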
00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jT5 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jT5 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.jT5 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0bb106cdaaca232513c3d7480d3ad358f7d61b198e0c209495b053cb9a06fd0b 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KWO 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0bb106cdaaca232513c3d7480d3ad358f7d61b198e0c209495b053cb9a06fd0b 3 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0bb106cdaaca232513c3d7480d3ad358f7d61b198e0c209495b053cb9a06fd0b 3 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0bb106cdaaca232513c3d7480d3ad358f7d61b198e0c209495b053cb9a06fd0b 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KWO 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KWO 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.KWO 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 
00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d19cbf32da56642c1dde021143eea3d2 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vPl 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d19cbf32da56642c1dde021143eea3d2 1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d19cbf32da56642c1dde021143eea3d2 1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d19cbf32da56642c1dde021143eea3d2 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:13.150 20:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vPl 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vPl 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.vPl 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fb9d35ce407387e8031e680c58107b8522109c577af96c2c 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IEH 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fb9d35ce407387e8031e680c58107b8522109c577af96c2c 2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fb9d35ce407387e8031e680c58107b8522109c577af96c2c 2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.150 20:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fb9d35ce407387e8031e680c58107b8522109c577af96c2c 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IEH 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IEH 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IEH 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09f32d44139d9729641c206b8fbc01a36c9c0a1e007ef7e1 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bjR 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09f32d44139d9729641c206b8fbc01a36c9c0a1e007ef7e1 2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09f32d44139d9729641c206b8fbc01a36c9c0a1e007ef7e1 2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09f32d44139d9729641c206b8fbc01a36c9c0a1e007ef7e1 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bjR 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bjR 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.bjR 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 
00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d30a34b5df0c7b2a024b894b166ce9f1 00:22:13.150 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AlJ 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d30a34b5df0c7b2a024b894b166ce9f1 1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d30a34b5df0c7b2a024b894b166ce9f1 1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d30a34b5df0c7b2a024b894b166ce9f1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AlJ 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AlJ 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.AlJ 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5eabc87447f41655cd8b7b8102674940f270199fa7e350105c5ad73f14190ccb 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eBX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 5eabc87447f41655cd8b7b8102674940f270199fa7e350105c5ad73f14190ccb 3 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5eabc87447f41655cd8b7b8102674940f270199fa7e350105c5ad73f14190ccb 3 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5eabc87447f41655cd8b7b8102674940f270199fa7e350105c5ad73f14190ccb 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eBX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eBX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.eBX 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3635587 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3635587 ']' 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.412 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3635937 /var/tmp/host.sock 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3635937 ']' 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
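At this point the generated secrets are sitting in /tmp as spdk.key-* files, the nvmf_tgt target is up, and the host-side spdk_tgt (-m 2 -r /var/tmp/host.sock -L nvme_auth) is listening. The traces that follow load each key file into both processes with keyring_file_add_key and then exercise every digest/dhgroup combination. Condensed to the first combination, and assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock, the wiring looks like:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Make key0/ckey0 known to the target (default socket) and to the host application.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.jT5
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KWO
$RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.jT5
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KWO

# 2. Restrict the host driver to one digest/dhgroup pair for this round.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 3. Require DH-HMAC-CHAP (key0 in, ckey0 back) for this host on the subsystem.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Attach from the host side; authentication happens during the connect traced below.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0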
00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.672 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jT5 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jT5 00:22:13.933 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jT5 00:22:14.194 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.KWO ]] 00:22:14.194 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KWO 00:22:14.194 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.194 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.194 20:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KWO 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KWO 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vPl 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.194 20:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vPl 00:22:14.194 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vPl 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IEH ]] 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IEH 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IEH 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IEH 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bjR 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bjR 00:22:14.454 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bjR 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.AlJ ]] 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AlJ 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AlJ 00:22:14.714 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AlJ 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.eBX 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.eBX 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.eBX 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:14.973 20:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.234 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.495 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.495 { 00:22:15.495 "cntlid": 1, 00:22:15.495 "qid": 0, 00:22:15.495 "state": "enabled", 00:22:15.495 "thread": "nvmf_tgt_poll_group_000", 00:22:15.495 "listen_address": { 00:22:15.495 "trtype": "TCP", 00:22:15.495 "adrfam": "IPv4", 00:22:15.495 "traddr": "10.0.0.2", 00:22:15.495 "trsvcid": "4420" 00:22:15.495 }, 00:22:15.495 "peer_address": { 00:22:15.495 "trtype": "TCP", 00:22:15.495 "adrfam": "IPv4", 00:22:15.495 "traddr": "10.0.0.1", 00:22:15.495 "trsvcid": "38040" 00:22:15.495 }, 00:22:15.495 "auth": { 00:22:15.495 "state": "completed", 00:22:15.495 "digest": "sha256", 00:22:15.495 "dhgroup": "null" 00:22:15.495 } 00:22:15.495 } 00:22:15.495 ]' 00:22:15.495 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.756 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.016 20:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret 
DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.588 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.849 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.110 00:22:17.110 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:22:17.110 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.110 20:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.371 { 00:22:17.371 "cntlid": 3, 00:22:17.371 "qid": 0, 00:22:17.371 "state": "enabled", 00:22:17.371 "thread": "nvmf_tgt_poll_group_000", 00:22:17.371 "listen_address": { 00:22:17.371 "trtype": "TCP", 00:22:17.371 "adrfam": "IPv4", 00:22:17.371 "traddr": "10.0.0.2", 00:22:17.371 "trsvcid": "4420" 00:22:17.371 }, 00:22:17.371 "peer_address": { 00:22:17.371 "trtype": "TCP", 00:22:17.371 "adrfam": "IPv4", 00:22:17.371 "traddr": "10.0.0.1", 00:22:17.371 "trsvcid": "38070" 00:22:17.371 }, 00:22:17.371 "auth": { 00:22:17.371 "state": "completed", 00:22:17.371 "digest": "sha256", 00:22:17.371 "dhgroup": "null" 00:22:17.371 } 00:22:17.371 } 00:22:17.371 ]' 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.371 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.632 20:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:18.203 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.465 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.726 00:22:18.726 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.726 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.726 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.986 20:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.986 { 00:22:18.986 "cntlid": 5, 00:22:18.986 "qid": 0, 00:22:18.986 "state": "enabled", 00:22:18.986 "thread": "nvmf_tgt_poll_group_000", 00:22:18.986 "listen_address": { 00:22:18.986 "trtype": "TCP", 00:22:18.986 "adrfam": "IPv4", 00:22:18.986 "traddr": "10.0.0.2", 00:22:18.986 "trsvcid": "4420" 00:22:18.986 }, 00:22:18.986 "peer_address": { 00:22:18.986 "trtype": "TCP", 00:22:18.986 "adrfam": "IPv4", 00:22:18.986 "traddr": "10.0.0.1", 00:22:18.986 "trsvcid": "38096" 00:22:18.986 }, 00:22:18.986 "auth": { 00:22:18.986 "state": "completed", 00:22:18.986 "digest": "sha256", 00:22:18.986 "dhgroup": "null" 00:22:18.986 } 00:22:18.986 } 00:22:18.986 ]' 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.986 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:18.987 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.987 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.987 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.987 20:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.247 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
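
Each iteration in the trace above follows the same RPC sequence: register the DH-HMAC-CHAP key files in both keyrings, allow the host on the subsystem with that key, point the host-side bdev_nvme layer at the permitted digest/dhgroup, and attach a controller that must authenticate. A minimal sketch of one such iteration is shown below; the key file paths are placeholders (the real files are generated earlier in the run), everything else mirrors the RPCs visible in the trace.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Register the host key and its controller (bidirectional) key in the
  # target keyring and in the host keyring behind /var/tmp/host.sock.
  for k in key0 ckey0; do
      "$rpc"                        keyring_file_add_key "$k" "/tmp/spdk.$k"   # placeholder paths
      "$rpc" -s /var/tmp/host.sock  keyring_file_add_key "$k" "/tmp/spdk.$k"
  done

  # Target side: the host may connect to cnode0 only with key0/ckey0.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: restrict negotiation to sha256 + the null DH group, then
  # attach a controller that has to authenticate with the same key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
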
00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.186 20:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.186 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.446 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.446 { 00:22:20.446 "cntlid": 7, 00:22:20.446 "qid": 0, 00:22:20.446 "state": "enabled", 00:22:20.446 "thread": "nvmf_tgt_poll_group_000", 00:22:20.446 "listen_address": { 00:22:20.446 "trtype": "TCP", 00:22:20.446 "adrfam": "IPv4", 00:22:20.446 "traddr": "10.0.0.2", 00:22:20.446 "trsvcid": "4420" 00:22:20.446 }, 00:22:20.446 "peer_address": { 00:22:20.446 "trtype": "TCP", 00:22:20.446 "adrfam": "IPv4", 00:22:20.446 "traddr": "10.0.0.1", 00:22:20.446 "trsvcid": "38128" 00:22:20.446 }, 00:22:20.446 "auth": { 00:22:20.446 "state": "completed", 00:22:20.446 "digest": "sha256", 00:22:20.446 "dhgroup": "null" 00:22:20.446 } 00:22:20.446 } 00:22:20.446 ]' 00:22:20.446 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.708 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.968 20:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
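
At this point the trace moves from the null DH group to ffdhe2048 and repeats the whole key sweep. The nesting implied by the target/auth.sh@91-@94 markers can be reconstructed roughly as below; the array contents beyond what is visible in this part of the log (sha256; null, ffdhe2048, ffdhe3072; key0-key3) are assumptions, and connect_authenticate stands for the per-key setup/verify/teardown sketched above.

  rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpcpy" -s /var/tmp/host.sock "$@"; }   # same shape as target/auth.sh@31

  digests=(sha256)                       # only sha256 appears in this part of the trace
  dhgroups=(null ffdhe2048 ffdhe3072)    # groups visible so far; more may follow later
  keys=(key0 key1 key2 key3)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Reconfigure the host initiator for this digest/dhgroup pair...
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # ...then run the add_host / attach / qpair-check / nvme
              # connect / teardown cycle for keys[$keyid] (see sketch above).
          done
      done
  done
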
00:22:21.540 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.800 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.060 00:22:22.060 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.060 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.060 20:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.320 { 00:22:22.320 "cntlid": 9, 00:22:22.320 "qid": 0, 00:22:22.320 "state": 
"enabled", 00:22:22.320 "thread": "nvmf_tgt_poll_group_000", 00:22:22.320 "listen_address": { 00:22:22.320 "trtype": "TCP", 00:22:22.320 "adrfam": "IPv4", 00:22:22.320 "traddr": "10.0.0.2", 00:22:22.320 "trsvcid": "4420" 00:22:22.320 }, 00:22:22.320 "peer_address": { 00:22:22.320 "trtype": "TCP", 00:22:22.320 "adrfam": "IPv4", 00:22:22.320 "traddr": "10.0.0.1", 00:22:22.320 "trsvcid": "44724" 00:22:22.320 }, 00:22:22.320 "auth": { 00:22:22.320 "state": "completed", 00:22:22.320 "digest": "sha256", 00:22:22.320 "dhgroup": "ffdhe2048" 00:22:22.320 } 00:22:22.320 } 00:22:22.320 ]' 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.320 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.580 20:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:23.150 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.150 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.150 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.150 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.410 20:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.410 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.671 00:22:23.671 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.671 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.671 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.930 { 00:22:23.930 "cntlid": 11, 00:22:23.930 "qid": 0, 00:22:23.930 "state": "enabled", 00:22:23.930 "thread": "nvmf_tgt_poll_group_000", 00:22:23.930 "listen_address": { 00:22:23.930 "trtype": "TCP", 00:22:23.930 "adrfam": "IPv4", 00:22:23.930 "traddr": "10.0.0.2", 00:22:23.930 "trsvcid": "4420" 00:22:23.930 }, 00:22:23.930 "peer_address": { 
00:22:23.930 "trtype": "TCP", 00:22:23.930 "adrfam": "IPv4", 00:22:23.930 "traddr": "10.0.0.1", 00:22:23.930 "trsvcid": "44748" 00:22:23.930 }, 00:22:23.930 "auth": { 00:22:23.930 "state": "completed", 00:22:23.930 "digest": "sha256", 00:22:23.930 "dhgroup": "ffdhe2048" 00:22:23.930 } 00:22:23.930 } 00:22:23.930 ]' 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.930 20:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.190 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:24.760 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.020 20:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.281 00:22:25.281 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.281 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.281 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.541 { 00:22:25.541 "cntlid": 13, 00:22:25.541 "qid": 0, 00:22:25.541 "state": "enabled", 00:22:25.541 "thread": "nvmf_tgt_poll_group_000", 00:22:25.541 "listen_address": { 00:22:25.541 "trtype": "TCP", 00:22:25.541 "adrfam": "IPv4", 00:22:25.541 "traddr": "10.0.0.2", 00:22:25.541 "trsvcid": "4420" 00:22:25.541 }, 00:22:25.541 "peer_address": { 00:22:25.541 "trtype": "TCP", 00:22:25.541 "adrfam": "IPv4", 00:22:25.541 "traddr": "10.0.0.1", 00:22:25.541 "trsvcid": "44766" 00:22:25.541 }, 00:22:25.541 "auth": { 00:22:25.541 "state": "completed", 00:22:25.541 "digest": "sha256", 00:22:25.541 "dhgroup": "ffdhe2048" 00:22:25.541 } 00:22:25.541 } 00:22:25.541 ]' 00:22:25.541 20:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.541 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.800 20:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.739 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.999 00:22:26.999 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.999 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.999 20:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.999 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.999 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.259 { 00:22:27.259 "cntlid": 15, 00:22:27.259 "qid": 0, 00:22:27.259 "state": "enabled", 00:22:27.259 "thread": "nvmf_tgt_poll_group_000", 00:22:27.259 "listen_address": { 00:22:27.259 "trtype": "TCP", 00:22:27.259 "adrfam": "IPv4", 00:22:27.259 "traddr": "10.0.0.2", 00:22:27.259 "trsvcid": "4420" 00:22:27.259 }, 00:22:27.259 "peer_address": { 00:22:27.259 "trtype": "TCP", 00:22:27.259 "adrfam": "IPv4", 00:22:27.259 "traddr": "10.0.0.1", 00:22:27.259 "trsvcid": "44798" 00:22:27.259 }, 00:22:27.259 "auth": { 00:22:27.259 "state": "completed", 00:22:27.259 "digest": "sha256", 00:22:27.259 "dhgroup": "ffdhe2048" 00:22:27.259 } 00:22:27.259 } 00:22:27.259 ]' 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.259 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.519 20:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:28.091 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.389 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.687 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.687 { 00:22:28.687 "cntlid": 17, 00:22:28.687 "qid": 0, 00:22:28.687 "state": "enabled", 00:22:28.687 "thread": "nvmf_tgt_poll_group_000", 00:22:28.687 "listen_address": { 00:22:28.687 "trtype": "TCP", 00:22:28.687 "adrfam": "IPv4", 00:22:28.687 "traddr": "10.0.0.2", 00:22:28.687 "trsvcid": "4420" 00:22:28.687 }, 00:22:28.687 "peer_address": { 00:22:28.687 "trtype": "TCP", 00:22:28.687 "adrfam": "IPv4", 00:22:28.687 "traddr": "10.0.0.1", 00:22:28.687 "trsvcid": "44830" 00:22:28.687 }, 00:22:28.687 "auth": { 00:22:28.687 "state": "completed", 00:22:28.687 "digest": "sha256", 00:22:28.687 "dhgroup": "ffdhe3072" 00:22:28.687 } 00:22:28.687 } 00:22:28.687 ]' 00:22:28.687 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.949 20:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.209 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:29.779 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.040 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.040 20:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.041 20:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.302 00:22:30.302 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.302 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.302 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.567 { 00:22:30.567 "cntlid": 19, 00:22:30.567 "qid": 0, 00:22:30.567 "state": "enabled", 00:22:30.567 "thread": "nvmf_tgt_poll_group_000", 00:22:30.567 "listen_address": { 00:22:30.567 "trtype": "TCP", 00:22:30.567 "adrfam": "IPv4", 00:22:30.567 "traddr": "10.0.0.2", 00:22:30.567 "trsvcid": "4420" 00:22:30.567 }, 00:22:30.567 "peer_address": { 00:22:30.567 "trtype": "TCP", 00:22:30.567 "adrfam": "IPv4", 00:22:30.567 "traddr": "10.0.0.1", 00:22:30.567 "trsvcid": "44860" 00:22:30.567 }, 00:22:30.567 "auth": { 00:22:30.567 "state": "completed", 00:22:30.567 "digest": "sha256", 00:22:30.567 "dhgroup": "ffdhe3072" 00:22:30.567 } 00:22:30.567 } 00:22:30.567 ]' 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:30.567 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.568 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.568 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.568 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.568 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.568 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.828 20:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:31.768 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
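For readability: the sequence repeated throughout this part of the log is one iteration of the test's authentication sweep. For each DH group and each configured DH-HMAC-CHAP key it (1) restricts the host-side digest/dhgroup with bdev_nvme_set_options, (2) allows the host on the subsystem with that key, (3) attaches a controller through the SPDK initiator, (4) checks via bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs that the queue pair authenticated with the expected digest, dhgroup, and "completed" state, and (5) detaches, repeats the handshake once with the kernel initiator (nvme connect/disconnect), and removes the host. The sketch below condenses that flow into a standalone loop; it is a reconstruction, not the target/auth.sh source. The rpc.py path, NQNs, address, and flags are copied from the log, while the target-side RPC socket, the ckeys layout, and the DHCHAP_SECRET/DHCHAP_CTRL_SECRET variables are placeholders/assumptions.

#!/usr/bin/env bash
# Condensed sketch of one pass of the DH-HMAC-CHAP sweep logged above (reconstruction, not the real script).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock                        # host-side SPDK app socket used by hostrpc above
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
digest=sha256
ckeys=(ckey0 ckey1 ckey2 "")                       # key3 has no controller key in this run

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do   # groups exercised in this portion of the log
  for keyid in 0 1 2 3; do
    ctrl=(); [[ -n "${ckeys[$keyid]}" ]] && ctrl=(--dhchap-ctrlr-key "${ckeys[$keyid]}")

    # Host side: only negotiate the digest/dhgroup under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side (default /var/tmp/spdk.sock assumed): allow the host with this key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ctrl[@]}"

    # Attach a controller through the SPDK initiator using the same keys.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ctrl[@]}"

    # Verify the controller exists and the qpair authenticated with the expected parameters,
    # mirroring the jq checks on .auth.digest / .auth.dhgroup / .auth.state in the dumps above.
    [[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e --arg d "$digest" --arg g "$dhgroup" \
        '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

    # Tear down, then repeat the connection once with the kernel initiator.
    # DHCHAP_SECRET/DHCHAP_CTRL_SECRET are placeholders for the DHHC-1 strings in the log;
    # in the real run the ctrl secret is likewise omitted when there is no controller key.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done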
00:22:31.769 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.030 00:22:32.030 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.030 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.030 20:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.030 { 00:22:32.030 "cntlid": 21, 00:22:32.030 "qid": 0, 00:22:32.030 "state": "enabled", 00:22:32.030 "thread": "nvmf_tgt_poll_group_000", 00:22:32.030 "listen_address": { 00:22:32.030 "trtype": "TCP", 00:22:32.030 "adrfam": "IPv4", 00:22:32.030 "traddr": "10.0.0.2", 00:22:32.030 "trsvcid": "4420" 00:22:32.030 }, 00:22:32.030 "peer_address": { 00:22:32.030 "trtype": "TCP", 00:22:32.030 "adrfam": "IPv4", 00:22:32.030 "traddr": "10.0.0.1", 00:22:32.030 "trsvcid": "44894" 00:22:32.030 }, 00:22:32.030 "auth": { 00:22:32.030 "state": "completed", 00:22:32.030 "digest": "sha256", 00:22:32.030 "dhgroup": "ffdhe3072" 00:22:32.030 } 00:22:32.030 } 00:22:32.030 ]' 00:22:32.030 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.291 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.552 20:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.123 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.384 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.385 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.645 00:22:33.645 20:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.645 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.645 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.906 { 00:22:33.906 "cntlid": 23, 00:22:33.906 "qid": 0, 00:22:33.906 "state": "enabled", 00:22:33.906 "thread": "nvmf_tgt_poll_group_000", 00:22:33.906 "listen_address": { 00:22:33.906 "trtype": "TCP", 00:22:33.906 "adrfam": "IPv4", 00:22:33.906 "traddr": "10.0.0.2", 00:22:33.906 "trsvcid": "4420" 00:22:33.906 }, 00:22:33.906 "peer_address": { 00:22:33.906 "trtype": "TCP", 00:22:33.906 "adrfam": "IPv4", 00:22:33.906 "traddr": "10.0.0.1", 00:22:33.906 "trsvcid": "35074" 00:22:33.906 }, 00:22:33.906 "auth": { 00:22:33.906 "state": "completed", 00:22:33.906 "digest": "sha256", 00:22:33.906 "dhgroup": "ffdhe3072" 00:22:33.906 } 00:22:33.906 } 00:22:33.906 ]' 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.906 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.167 20:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.738 20:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:34.738 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:34.998 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:22:34.998 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.998 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.999 20:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.259 00:22:35.259 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.259 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.259 20:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.520 { 00:22:35.520 "cntlid": 25, 00:22:35.520 "qid": 0, 00:22:35.520 "state": "enabled", 00:22:35.520 "thread": "nvmf_tgt_poll_group_000", 00:22:35.520 "listen_address": { 00:22:35.520 "trtype": "TCP", 00:22:35.520 "adrfam": "IPv4", 00:22:35.520 "traddr": "10.0.0.2", 00:22:35.520 "trsvcid": "4420" 00:22:35.520 }, 00:22:35.520 "peer_address": { 00:22:35.520 "trtype": "TCP", 00:22:35.520 "adrfam": "IPv4", 00:22:35.520 "traddr": "10.0.0.1", 00:22:35.520 "trsvcid": "35104" 00:22:35.520 }, 00:22:35.520 "auth": { 00:22:35.520 "state": "completed", 00:22:35.520 "digest": "sha256", 00:22:35.520 "dhgroup": "ffdhe4096" 00:22:35.520 } 00:22:35.520 } 00:22:35.520 ]' 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.520 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.781 20:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.723 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.984 00:22:36.984 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.984 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.984 20:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.244 { 00:22:37.244 "cntlid": 27, 00:22:37.244 "qid": 0, 00:22:37.244 "state": "enabled", 00:22:37.244 "thread": "nvmf_tgt_poll_group_000", 00:22:37.244 "listen_address": { 00:22:37.244 "trtype": "TCP", 00:22:37.244 "adrfam": "IPv4", 00:22:37.244 "traddr": "10.0.0.2", 00:22:37.244 "trsvcid": "4420" 00:22:37.244 }, 00:22:37.244 "peer_address": { 00:22:37.244 "trtype": "TCP", 00:22:37.244 "adrfam": "IPv4", 00:22:37.244 "traddr": "10.0.0.1", 00:22:37.244 "trsvcid": "35122" 00:22:37.244 }, 00:22:37.244 "auth": { 00:22:37.244 "state": "completed", 00:22:37.244 "digest": "sha256", 00:22:37.244 "dhgroup": "ffdhe4096" 00:22:37.244 } 00:22:37.244 } 00:22:37.244 ]' 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.244 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.505 20:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.446 20:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.446 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.447 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.447 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.447 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.447 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.447 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.707 00:22:38.707 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.707 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.707 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.968 20:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.968 { 00:22:38.968 "cntlid": 29, 00:22:38.968 "qid": 0, 00:22:38.968 "state": "enabled", 00:22:38.968 "thread": "nvmf_tgt_poll_group_000", 00:22:38.968 "listen_address": { 00:22:38.968 "trtype": "TCP", 00:22:38.968 "adrfam": "IPv4", 00:22:38.968 "traddr": "10.0.0.2", 00:22:38.968 "trsvcid": "4420" 00:22:38.968 }, 00:22:38.968 "peer_address": { 00:22:38.968 "trtype": "TCP", 00:22:38.968 "adrfam": "IPv4", 00:22:38.968 "traddr": "10.0.0.1", 00:22:38.968 "trsvcid": "35150" 00:22:38.968 }, 00:22:38.968 "auth": { 00:22:38.968 "state": "completed", 00:22:38.968 "digest": "sha256", 00:22:38.968 "dhgroup": "ffdhe4096" 00:22:38.968 } 00:22:38.968 } 00:22:38.968 ]' 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.968 20:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.228 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.799 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe4096 00:22:39.800 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.060 20:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.321 00:22:40.321 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.321 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.321 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.582 { 00:22:40.582 "cntlid": 31, 00:22:40.582 "qid": 0, 00:22:40.582 "state": "enabled", 00:22:40.582 "thread": 
"nvmf_tgt_poll_group_000", 00:22:40.582 "listen_address": { 00:22:40.582 "trtype": "TCP", 00:22:40.582 "adrfam": "IPv4", 00:22:40.582 "traddr": "10.0.0.2", 00:22:40.582 "trsvcid": "4420" 00:22:40.582 }, 00:22:40.582 "peer_address": { 00:22:40.582 "trtype": "TCP", 00:22:40.582 "adrfam": "IPv4", 00:22:40.582 "traddr": "10.0.0.1", 00:22:40.582 "trsvcid": "35186" 00:22:40.582 }, 00:22:40.582 "auth": { 00:22:40.582 "state": "completed", 00:22:40.582 "digest": "sha256", 00:22:40.582 "dhgroup": "ffdhe4096" 00:22:40.582 } 00:22:40.582 } 00:22:40.582 ]' 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.582 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.843 20:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:41.785 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.786 20:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.047 00:22:42.047 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.047 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.047 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.307 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.307 { 00:22:42.307 "cntlid": 33, 00:22:42.307 "qid": 0, 00:22:42.307 "state": "enabled", 00:22:42.307 "thread": "nvmf_tgt_poll_group_000", 00:22:42.307 "listen_address": { 00:22:42.307 "trtype": "TCP", 00:22:42.307 "adrfam": "IPv4", 00:22:42.307 "traddr": "10.0.0.2", 00:22:42.307 "trsvcid": "4420" 00:22:42.307 }, 00:22:42.307 "peer_address": { 00:22:42.307 "trtype": "TCP", 00:22:42.307 "adrfam": 
"IPv4", 00:22:42.307 "traddr": "10.0.0.1", 00:22:42.307 "trsvcid": "42072" 00:22:42.307 }, 00:22:42.308 "auth": { 00:22:42.308 "state": "completed", 00:22:42.308 "digest": "sha256", 00:22:42.308 "dhgroup": "ffdhe6144" 00:22:42.308 } 00:22:42.308 } 00:22:42.308 ]' 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.308 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.568 20:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:43.139 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.139 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:43.140 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:43.400 
20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.400 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.661 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.922 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.922 { 00:22:43.922 "cntlid": 35, 00:22:43.922 "qid": 0, 00:22:43.922 "state": "enabled", 00:22:43.922 "thread": "nvmf_tgt_poll_group_000", 00:22:43.922 "listen_address": { 00:22:43.922 "trtype": "TCP", 00:22:43.922 "adrfam": "IPv4", 00:22:43.922 "traddr": "10.0.0.2", 00:22:43.922 "trsvcid": "4420" 00:22:43.922 }, 00:22:43.922 "peer_address": { 00:22:43.922 "trtype": "TCP", 00:22:43.922 "adrfam": "IPv4", 00:22:43.922 "traddr": "10.0.0.1", 00:22:43.922 "trsvcid": "42096" 00:22:43.922 }, 00:22:43.923 "auth": { 00:22:43.923 "state": "completed", 00:22:43.923 "digest": "sha256", 00:22:43.923 "dhgroup": "ffdhe6144" 00:22:43.923 } 00:22:43.923 } 00:22:43.923 ]' 00:22:43.923 20:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.923 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.923 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.183 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:44.183 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.183 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.183 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.183 20:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.183 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:45.127 20:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.127 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.388 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.649 { 00:22:45.649 "cntlid": 37, 00:22:45.649 "qid": 0, 00:22:45.649 "state": "enabled", 00:22:45.649 "thread": "nvmf_tgt_poll_group_000", 00:22:45.649 "listen_address": { 00:22:45.649 "trtype": "TCP", 00:22:45.649 "adrfam": "IPv4", 00:22:45.649 "traddr": "10.0.0.2", 00:22:45.649 "trsvcid": "4420" 00:22:45.649 }, 00:22:45.649 "peer_address": { 00:22:45.649 "trtype": "TCP", 00:22:45.649 "adrfam": "IPv4", 00:22:45.649 "traddr": "10.0.0.1", 00:22:45.649 "trsvcid": "42132" 00:22:45.649 }, 00:22:45.649 "auth": { 00:22:45.649 "state": "completed", 00:22:45.649 "digest": "sha256", 00:22:45.649 "dhgroup": "ffdhe6144" 00:22:45.649 } 00:22:45.649 } 00:22:45.649 ]' 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.649 20:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.649 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.911 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.911 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.911 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.911 20:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:46.852 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:46.853 20:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.114 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.375 { 00:22:47.375 "cntlid": 39, 00:22:47.375 "qid": 0, 00:22:47.375 "state": "enabled", 00:22:47.375 "thread": "nvmf_tgt_poll_group_000", 00:22:47.375 "listen_address": { 00:22:47.375 "trtype": "TCP", 00:22:47.375 "adrfam": "IPv4", 00:22:47.375 "traddr": "10.0.0.2", 00:22:47.375 "trsvcid": "4420" 00:22:47.375 }, 00:22:47.375 "peer_address": { 00:22:47.375 "trtype": "TCP", 00:22:47.375 "adrfam": "IPv4", 00:22:47.375 "traddr": "10.0.0.1", 00:22:47.375 "trsvcid": "42156" 00:22:47.375 }, 00:22:47.375 "auth": { 00:22:47.375 "state": "completed", 00:22:47.375 "digest": "sha256", 00:22:47.375 "dhgroup": "ffdhe6144" 00:22:47.375 } 00:22:47.375 } 00:22:47.375 ]' 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:47.375 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.636 20:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.639 20:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.209 00:22:49.209 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.209 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.209 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.470 { 00:22:49.470 "cntlid": 41, 00:22:49.470 "qid": 0, 00:22:49.470 "state": "enabled", 00:22:49.470 "thread": "nvmf_tgt_poll_group_000", 00:22:49.470 "listen_address": { 00:22:49.470 "trtype": "TCP", 00:22:49.470 "adrfam": "IPv4", 00:22:49.470 "traddr": "10.0.0.2", 00:22:49.470 "trsvcid": "4420" 00:22:49.470 }, 00:22:49.470 "peer_address": { 00:22:49.470 "trtype": "TCP", 00:22:49.470 "adrfam": "IPv4", 00:22:49.470 "traddr": "10.0.0.1", 00:22:49.470 "trsvcid": "42184" 00:22:49.470 }, 00:22:49.470 "auth": { 00:22:49.470 "state": "completed", 00:22:49.470 "digest": "sha256", 00:22:49.470 "dhgroup": "ffdhe8192" 00:22:49.470 } 00:22:49.470 } 00:22:49.470 ]' 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.470 20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.731 
20:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:50.302 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.563 20:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.132 00:22:51.132 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.132 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.133 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.392 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.393 { 00:22:51.393 "cntlid": 43, 00:22:51.393 "qid": 0, 00:22:51.393 "state": "enabled", 00:22:51.393 "thread": "nvmf_tgt_poll_group_000", 00:22:51.393 "listen_address": { 00:22:51.393 "trtype": "TCP", 00:22:51.393 "adrfam": "IPv4", 00:22:51.393 "traddr": "10.0.0.2", 00:22:51.393 "trsvcid": "4420" 00:22:51.393 }, 00:22:51.393 "peer_address": { 00:22:51.393 "trtype": "TCP", 00:22:51.393 "adrfam": "IPv4", 00:22:51.393 "traddr": "10.0.0.1", 00:22:51.393 "trsvcid": "42208" 00:22:51.393 }, 00:22:51.393 "auth": { 00:22:51.393 "state": "completed", 00:22:51.393 "digest": "sha256", 00:22:51.393 "dhgroup": "ffdhe8192" 00:22:51.393 } 00:22:51.393 } 00:22:51.393 ]' 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.393 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.653 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:52.224 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.485 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.056 
00:22:53.056 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.056 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.056 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.317 { 00:22:53.317 "cntlid": 45, 00:22:53.317 "qid": 0, 00:22:53.317 "state": "enabled", 00:22:53.317 "thread": "nvmf_tgt_poll_group_000", 00:22:53.317 "listen_address": { 00:22:53.317 "trtype": "TCP", 00:22:53.317 "adrfam": "IPv4", 00:22:53.317 "traddr": "10.0.0.2", 00:22:53.317 "trsvcid": "4420" 00:22:53.317 }, 00:22:53.317 "peer_address": { 00:22:53.317 "trtype": "TCP", 00:22:53.317 "adrfam": "IPv4", 00:22:53.317 "traddr": "10.0.0.1", 00:22:53.317 "trsvcid": "55406" 00:22:53.317 }, 00:22:53.317 "auth": { 00:22:53.317 "state": "completed", 00:22:53.317 "digest": "sha256", 00:22:53.317 "dhgroup": "ffdhe8192" 00:22:53.317 } 00:22:53.317 } 00:22:53.317 ]' 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.317 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.577 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.519 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.519 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.090 00:22:55.090 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.090 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.090 20:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.090 { 00:22:55.090 "cntlid": 47, 00:22:55.090 "qid": 0, 00:22:55.090 "state": "enabled", 00:22:55.090 "thread": "nvmf_tgt_poll_group_000", 00:22:55.090 "listen_address": { 00:22:55.090 "trtype": "TCP", 00:22:55.090 "adrfam": "IPv4", 00:22:55.090 "traddr": "10.0.0.2", 00:22:55.090 "trsvcid": "4420" 00:22:55.090 }, 00:22:55.090 "peer_address": { 00:22:55.090 "trtype": "TCP", 00:22:55.090 "adrfam": "IPv4", 00:22:55.090 "traddr": "10.0.0.1", 00:22:55.090 "trsvcid": "55422" 00:22:55.090 }, 00:22:55.090 "auth": { 00:22:55.090 "state": "completed", 00:22:55.090 "digest": "sha256", 00:22:55.090 "dhgroup": "ffdhe8192" 00:22:55.090 } 00:22:55.090 } 00:22:55.090 ]' 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.090 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.350 20:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.288 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.547 00:22:56.547 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.547 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.547 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.808 20:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.808 { 00:22:56.808 "cntlid": 49, 00:22:56.808 "qid": 0, 00:22:56.808 "state": "enabled", 00:22:56.808 "thread": "nvmf_tgt_poll_group_000", 00:22:56.808 "listen_address": { 00:22:56.808 "trtype": "TCP", 00:22:56.808 "adrfam": "IPv4", 00:22:56.808 "traddr": "10.0.0.2", 00:22:56.808 "trsvcid": "4420" 00:22:56.808 }, 00:22:56.808 "peer_address": { 00:22:56.808 "trtype": "TCP", 00:22:56.808 "adrfam": "IPv4", 00:22:56.808 "traddr": "10.0.0.1", 00:22:56.808 "trsvcid": "55450" 00:22:56.808 }, 00:22:56.808 "auth": { 00:22:56.808 "state": "completed", 00:22:56.808 "digest": "sha384", 00:22:56.808 "dhgroup": "null" 00:22:56.808 } 00:22:56.808 } 00:22:56.808 ]' 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.808 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.068 20:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.009 20:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.268 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.268 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.528 { 00:22:58.528 "cntlid": 51, 00:22:58.528 "qid": 0, 00:22:58.528 "state": "enabled", 00:22:58.528 "thread": "nvmf_tgt_poll_group_000", 00:22:58.528 "listen_address": { 00:22:58.528 "trtype": "TCP", 00:22:58.528 "adrfam": "IPv4", 00:22:58.528 "traddr": "10.0.0.2", 00:22:58.528 "trsvcid": "4420" 00:22:58.528 }, 00:22:58.528 "peer_address": { 00:22:58.528 "trtype": "TCP", 00:22:58.528 "adrfam": "IPv4", 00:22:58.528 "traddr": "10.0.0.1", 00:22:58.528 "trsvcid": "55472" 00:22:58.528 }, 00:22:58.528 "auth": { 00:22:58.528 "state": "completed", 00:22:58.528 "digest": "sha384", 00:22:58.528 "dhgroup": "null" 00:22:58.528 } 00:22:58.528 } 00:22:58.528 ]' 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.528 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.788 20:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:22:59.358 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.358 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.358 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.358 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.358 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.359 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.359 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:59.359 20:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.619 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.879 00:22:59.879 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.879 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.879 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.139 { 00:23:00.139 "cntlid": 53, 00:23:00.139 "qid": 0, 00:23:00.139 "state": "enabled", 00:23:00.139 "thread": 
"nvmf_tgt_poll_group_000", 00:23:00.139 "listen_address": { 00:23:00.139 "trtype": "TCP", 00:23:00.139 "adrfam": "IPv4", 00:23:00.139 "traddr": "10.0.0.2", 00:23:00.139 "trsvcid": "4420" 00:23:00.139 }, 00:23:00.139 "peer_address": { 00:23:00.139 "trtype": "TCP", 00:23:00.139 "adrfam": "IPv4", 00:23:00.139 "traddr": "10.0.0.1", 00:23:00.139 "trsvcid": "55516" 00:23:00.139 }, 00:23:00.139 "auth": { 00:23:00.139 "state": "completed", 00:23:00.139 "digest": "sha384", 00:23:00.139 "dhgroup": "null" 00:23:00.139 } 00:23:00.139 } 00:23:00.139 ]' 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.139 20:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.139 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:00.139 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.139 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.139 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.139 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.400 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:00.970 20:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:23:01.230 20:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.230 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.490 00:23:01.490 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.490 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.490 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.750 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.750 { 00:23:01.750 "cntlid": 55, 00:23:01.750 "qid": 0, 00:23:01.750 "state": "enabled", 00:23:01.750 "thread": "nvmf_tgt_poll_group_000", 00:23:01.750 "listen_address": { 00:23:01.750 "trtype": "TCP", 00:23:01.750 "adrfam": "IPv4", 00:23:01.750 "traddr": "10.0.0.2", 00:23:01.750 "trsvcid": "4420" 00:23:01.750 }, 00:23:01.751 "peer_address": { 00:23:01.751 "trtype": "TCP", 00:23:01.751 "adrfam": "IPv4", 00:23:01.751 "traddr": "10.0.0.1", 00:23:01.751 "trsvcid": "55538" 00:23:01.751 }, 00:23:01.751 "auth": { 00:23:01.751 "state": "completed", 00:23:01.751 
"digest": "sha384", 00:23:01.751 "dhgroup": "null" 00:23:01.751 } 00:23:01.751 } 00:23:01.751 ]' 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.751 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.010 20:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:02.580 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.841 20:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.102 00:23:03.102 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.102 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.102 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.363 { 00:23:03.363 "cntlid": 57, 00:23:03.363 "qid": 0, 00:23:03.363 "state": "enabled", 00:23:03.363 "thread": "nvmf_tgt_poll_group_000", 00:23:03.363 "listen_address": { 00:23:03.363 "trtype": "TCP", 00:23:03.363 "adrfam": "IPv4", 00:23:03.363 "traddr": "10.0.0.2", 00:23:03.363 "trsvcid": "4420" 00:23:03.363 }, 00:23:03.363 "peer_address": { 00:23:03.363 "trtype": "TCP", 00:23:03.363 "adrfam": "IPv4", 00:23:03.363 "traddr": "10.0.0.1", 00:23:03.363 "trsvcid": "38246" 00:23:03.363 }, 00:23:03.363 "auth": { 00:23:03.363 "state": "completed", 00:23:03.363 "digest": "sha384", 00:23:03.363 "dhgroup": "ffdhe2048" 00:23:03.363 } 00:23:03.363 } 00:23:03.363 ]' 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.363 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.624 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.565 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.826 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.826 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.088 { 00:23:05.088 "cntlid": 59, 00:23:05.088 "qid": 0, 00:23:05.088 "state": "enabled", 00:23:05.088 "thread": "nvmf_tgt_poll_group_000", 00:23:05.088 "listen_address": { 00:23:05.088 "trtype": "TCP", 00:23:05.088 "adrfam": "IPv4", 00:23:05.088 "traddr": "10.0.0.2", 00:23:05.088 "trsvcid": "4420" 00:23:05.088 }, 00:23:05.088 "peer_address": { 00:23:05.088 "trtype": "TCP", 00:23:05.088 "adrfam": "IPv4", 00:23:05.088 "traddr": "10.0.0.1", 00:23:05.088 "trsvcid": "38266" 00:23:05.088 }, 00:23:05.088 "auth": { 00:23:05.088 "state": "completed", 00:23:05.088 "digest": "sha384", 00:23:05.088 "dhgroup": "ffdhe2048" 00:23:05.088 } 00:23:05.088 } 00:23:05.088 ]' 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:05.088 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.088 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.348 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:05.920 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.180 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.181 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.441 00:23:06.441 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.441 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.441 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.702 { 00:23:06.702 "cntlid": 61, 00:23:06.702 "qid": 0, 00:23:06.702 "state": "enabled", 00:23:06.702 "thread": "nvmf_tgt_poll_group_000", 00:23:06.702 "listen_address": { 00:23:06.702 "trtype": "TCP", 00:23:06.702 "adrfam": "IPv4", 00:23:06.702 "traddr": "10.0.0.2", 00:23:06.702 "trsvcid": "4420" 00:23:06.702 }, 00:23:06.702 "peer_address": { 00:23:06.702 "trtype": "TCP", 00:23:06.702 "adrfam": "IPv4", 00:23:06.702 "traddr": "10.0.0.1", 00:23:06.702 "trsvcid": "38312" 00:23:06.702 }, 00:23:06.702 "auth": { 00:23:06.702 "state": "completed", 00:23:06.702 "digest": "sha384", 00:23:06.702 "dhgroup": "ffdhe2048" 00:23:06.702 } 00:23:06.702 } 00:23:06.702 ]' 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:23:06.702 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.963 20:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:07.534 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.534 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.534 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.534 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.834 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.133 00:23:08.133 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.133 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.133 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.133 { 00:23:08.133 "cntlid": 63, 00:23:08.133 "qid": 0, 00:23:08.133 "state": "enabled", 00:23:08.133 "thread": "nvmf_tgt_poll_group_000", 00:23:08.133 "listen_address": { 00:23:08.133 "trtype": "TCP", 00:23:08.133 "adrfam": "IPv4", 00:23:08.133 "traddr": "10.0.0.2", 00:23:08.133 "trsvcid": "4420" 00:23:08.133 }, 00:23:08.133 "peer_address": { 00:23:08.133 "trtype": "TCP", 00:23:08.133 "adrfam": "IPv4", 00:23:08.133 "traddr": "10.0.0.1", 00:23:08.133 "trsvcid": "38344" 00:23:08.133 }, 00:23:08.133 "auth": { 00:23:08.133 "state": "completed", 00:23:08.133 "digest": "sha384", 00:23:08.133 "dhgroup": "ffdhe2048" 00:23:08.133 } 00:23:08.133 } 00:23:08.133 ]' 00:23:08.133 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.395 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.656 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:09.227 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.489 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.750 00:23:09.750 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.750 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.750 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.011 { 00:23:10.011 "cntlid": 65, 00:23:10.011 "qid": 0, 00:23:10.011 "state": "enabled", 00:23:10.011 "thread": "nvmf_tgt_poll_group_000", 00:23:10.011 "listen_address": { 00:23:10.011 "trtype": "TCP", 00:23:10.011 "adrfam": "IPv4", 00:23:10.011 "traddr": "10.0.0.2", 00:23:10.011 "trsvcid": "4420" 00:23:10.011 }, 00:23:10.011 "peer_address": { 00:23:10.011 "trtype": "TCP", 00:23:10.011 "adrfam": "IPv4", 00:23:10.011 "traddr": "10.0.0.1", 00:23:10.011 "trsvcid": "38384" 00:23:10.011 }, 00:23:10.011 "auth": { 00:23:10.011 "state": "completed", 00:23:10.011 "digest": "sha384", 00:23:10.011 "dhgroup": "ffdhe3072" 00:23:10.011 } 00:23:10.011 } 00:23:10.011 ]' 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.011 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.272 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret 
DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:10.845 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.845 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.845 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.845 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.845 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.106 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.106 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.106 20:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.106 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.107 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.381 00:23:11.381 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.381 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.381 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.643 { 00:23:11.643 "cntlid": 67, 00:23:11.643 "qid": 0, 00:23:11.643 "state": "enabled", 00:23:11.643 "thread": "nvmf_tgt_poll_group_000", 00:23:11.643 "listen_address": { 00:23:11.643 "trtype": "TCP", 00:23:11.643 "adrfam": "IPv4", 00:23:11.643 "traddr": "10.0.0.2", 00:23:11.643 "trsvcid": "4420" 00:23:11.643 }, 00:23:11.643 "peer_address": { 00:23:11.643 "trtype": "TCP", 00:23:11.643 "adrfam": "IPv4", 00:23:11.643 "traddr": "10.0.0.1", 00:23:11.643 "trsvcid": "38410" 00:23:11.643 }, 00:23:11.643 "auth": { 00:23:11.643 "state": "completed", 00:23:11.643 "digest": "sha384", 00:23:11.643 "dhgroup": "ffdhe3072" 00:23:11.643 } 00:23:11.643 } 00:23:11.643 ]' 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.643 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.904 20:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.847 20:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.847 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.108 00:23:13.108 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.108 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.108 20:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.368 { 00:23:13.368 "cntlid": 69, 00:23:13.368 "qid": 0, 00:23:13.368 "state": "enabled", 00:23:13.368 "thread": "nvmf_tgt_poll_group_000", 00:23:13.368 "listen_address": { 00:23:13.368 "trtype": "TCP", 00:23:13.368 "adrfam": "IPv4", 00:23:13.368 "traddr": "10.0.0.2", 00:23:13.368 "trsvcid": "4420" 00:23:13.368 }, 00:23:13.368 "peer_address": { 00:23:13.368 "trtype": "TCP", 00:23:13.368 "adrfam": "IPv4", 00:23:13.368 "traddr": "10.0.0.1", 00:23:13.368 "trsvcid": "33300" 00:23:13.368 }, 00:23:13.368 "auth": { 00:23:13.368 "state": "completed", 00:23:13.368 "digest": "sha384", 00:23:13.368 "dhgroup": "ffdhe3072" 00:23:13.368 } 00:23:13.368 } 00:23:13.368 ]' 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.368 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.369 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.629 20:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:14.201 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.201 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.201 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.201 20:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.462 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.723 00:23:14.723 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:14.723 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:14.723 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.984 { 00:23:14.984 "cntlid": 71, 00:23:14.984 "qid": 0, 00:23:14.984 "state": "enabled", 00:23:14.984 "thread": "nvmf_tgt_poll_group_000", 00:23:14.984 "listen_address": { 00:23:14.984 "trtype": "TCP", 00:23:14.984 "adrfam": "IPv4", 00:23:14.984 "traddr": "10.0.0.2", 00:23:14.984 "trsvcid": "4420" 00:23:14.984 }, 00:23:14.984 "peer_address": { 00:23:14.984 "trtype": "TCP", 00:23:14.984 "adrfam": "IPv4", 00:23:14.984 "traddr": "10.0.0.1", 00:23:14.984 "trsvcid": "33312" 00:23:14.984 }, 00:23:14.984 "auth": { 00:23:14.984 "state": "completed", 00:23:14.984 "digest": "sha384", 00:23:14.984 "dhgroup": "ffdhe3072" 00:23:14.984 } 00:23:14.984 } 00:23:14.984 ]' 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.984 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.244 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.186 20:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.186 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.186 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.447 00:23:16.447 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.447 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.447 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.708 20:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:16.708 { 00:23:16.708 "cntlid": 73, 00:23:16.708 "qid": 0, 00:23:16.708 "state": "enabled", 00:23:16.708 "thread": "nvmf_tgt_poll_group_000", 00:23:16.708 "listen_address": { 00:23:16.708 "trtype": "TCP", 00:23:16.708 "adrfam": "IPv4", 00:23:16.708 "traddr": "10.0.0.2", 00:23:16.708 "trsvcid": "4420" 00:23:16.708 }, 00:23:16.708 "peer_address": { 00:23:16.708 "trtype": "TCP", 00:23:16.708 "adrfam": "IPv4", 00:23:16.708 "traddr": "10.0.0.1", 00:23:16.708 "trsvcid": "33340" 00:23:16.708 }, 00:23:16.708 "auth": { 00:23:16.708 "state": "completed", 00:23:16.708 "digest": "sha384", 00:23:16.708 "dhgroup": "ffdhe4096" 00:23:16.708 } 00:23:16.708 } 00:23:16.708 ]' 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.708 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.969 20:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.541 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.801 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.802 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.062 00:23:18.062 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.062 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.062 20:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.323 { 00:23:18.323 "cntlid": 75, 00:23:18.323 "qid": 0, 00:23:18.323 "state": "enabled", 00:23:18.323 "thread": "nvmf_tgt_poll_group_000", 00:23:18.323 "listen_address": { 
00:23:18.323 "trtype": "TCP", 00:23:18.323 "adrfam": "IPv4", 00:23:18.323 "traddr": "10.0.0.2", 00:23:18.323 "trsvcid": "4420" 00:23:18.323 }, 00:23:18.323 "peer_address": { 00:23:18.323 "trtype": "TCP", 00:23:18.323 "adrfam": "IPv4", 00:23:18.323 "traddr": "10.0.0.1", 00:23:18.323 "trsvcid": "33362" 00:23:18.323 }, 00:23:18.323 "auth": { 00:23:18.323 "state": "completed", 00:23:18.323 "digest": "sha384", 00:23:18.323 "dhgroup": "ffdhe4096" 00:23:18.323 } 00:23:18.323 } 00:23:18.323 ]' 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.323 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.584 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:19.527 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.527 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.527 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.527 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.528 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.788 00:23:19.788 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:19.788 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.788 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.049 { 00:23:20.049 "cntlid": 77, 00:23:20.049 "qid": 0, 00:23:20.049 "state": "enabled", 00:23:20.049 "thread": "nvmf_tgt_poll_group_000", 00:23:20.049 "listen_address": { 00:23:20.049 "trtype": "TCP", 00:23:20.049 "adrfam": "IPv4", 00:23:20.049 "traddr": "10.0.0.2", 00:23:20.049 "trsvcid": "4420" 00:23:20.049 }, 00:23:20.049 "peer_address": { 00:23:20.049 "trtype": "TCP", 00:23:20.049 "adrfam": "IPv4", 00:23:20.049 "traddr": "10.0.0.1", 00:23:20.049 "trsvcid": "33384" 00:23:20.049 }, 00:23:20.049 "auth": { 00:23:20.049 
"state": "completed", 00:23:20.049 "digest": "sha384", 00:23:20.049 "dhgroup": "ffdhe4096" 00:23:20.049 } 00:23:20.049 } 00:23:20.049 ]' 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.049 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.310 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:20.881 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.881 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.881 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.881 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.142 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.143 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.143 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.143 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:21.143 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:21.404 00:23:21.404 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:21.404 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.404 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.664 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:21.664 { 00:23:21.664 "cntlid": 79, 00:23:21.664 "qid": 0, 00:23:21.664 "state": "enabled", 00:23:21.664 "thread": "nvmf_tgt_poll_group_000", 00:23:21.664 "listen_address": { 00:23:21.664 "trtype": "TCP", 00:23:21.664 "adrfam": "IPv4", 00:23:21.664 "traddr": "10.0.0.2", 00:23:21.664 "trsvcid": "4420" 00:23:21.664 }, 00:23:21.664 "peer_address": { 00:23:21.664 "trtype": "TCP", 00:23:21.664 "adrfam": "IPv4", 00:23:21.664 "traddr": "10.0.0.1", 00:23:21.664 "trsvcid": "33420" 00:23:21.664 }, 00:23:21.664 "auth": { 00:23:21.664 "state": "completed", 00:23:21.664 "digest": "sha384", 00:23:21.664 "dhgroup": "ffdhe4096" 00:23:21.665 } 00:23:21.665 } 00:23:21.665 ]' 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:21.665 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.665 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.926 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.869 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.870 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.870 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.130 00:23:23.130 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:23.130 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.130 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:23.450 { 00:23:23.450 "cntlid": 81, 00:23:23.450 "qid": 0, 00:23:23.450 "state": "enabled", 00:23:23.450 "thread": "nvmf_tgt_poll_group_000", 00:23:23.450 "listen_address": { 00:23:23.450 "trtype": "TCP", 00:23:23.450 "adrfam": "IPv4", 00:23:23.450 "traddr": "10.0.0.2", 00:23:23.450 "trsvcid": "4420" 00:23:23.450 }, 00:23:23.450 "peer_address": { 00:23:23.450 "trtype": "TCP", 00:23:23.450 "adrfam": "IPv4", 00:23:23.450 "traddr": "10.0.0.1", 00:23:23.450 "trsvcid": "43636" 00:23:23.450 }, 00:23:23.450 "auth": { 00:23:23.450 "state": "completed", 00:23:23.450 "digest": "sha384", 00:23:23.450 "dhgroup": "ffdhe6144" 00:23:23.450 } 00:23:23.450 } 00:23:23.450 ]' 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:23.450 20:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.450 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.711 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:24.281 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.542 20:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.542 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.113 00:23:25.113 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.113 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.113 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.113 { 00:23:25.113 "cntlid": 83, 00:23:25.113 "qid": 0, 00:23:25.113 "state": "enabled", 00:23:25.113 "thread": "nvmf_tgt_poll_group_000", 00:23:25.113 "listen_address": { 00:23:25.113 "trtype": "TCP", 00:23:25.113 "adrfam": "IPv4", 00:23:25.113 "traddr": "10.0.0.2", 00:23:25.113 "trsvcid": "4420" 00:23:25.113 }, 00:23:25.113 "peer_address": { 00:23:25.113 "trtype": "TCP", 00:23:25.113 "adrfam": "IPv4", 00:23:25.113 "traddr": "10.0.0.1", 00:23:25.113 "trsvcid": "43658" 00:23:25.113 }, 00:23:25.113 "auth": { 00:23:25.113 "state": "completed", 00:23:25.113 "digest": "sha384", 00:23:25.113 "dhgroup": "ffdhe6144" 00:23:25.113 } 00:23:25.113 } 00:23:25.113 ]' 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:25.113 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:25.374 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.374 20:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.374 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.374 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.316 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.317 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.317 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.317 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.887 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.887 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:26.887 { 00:23:26.887 "cntlid": 85, 00:23:26.887 "qid": 0, 00:23:26.887 "state": "enabled", 00:23:26.887 "thread": "nvmf_tgt_poll_group_000", 00:23:26.887 "listen_address": { 00:23:26.887 "trtype": "TCP", 00:23:26.887 "adrfam": "IPv4", 00:23:26.887 "traddr": "10.0.0.2", 00:23:26.887 "trsvcid": "4420" 00:23:26.887 }, 00:23:26.887 "peer_address": { 00:23:26.887 "trtype": "TCP", 00:23:26.887 "adrfam": "IPv4", 00:23:26.887 "traddr": "10.0.0.1", 00:23:26.887 "trsvcid": "43674" 00:23:26.887 }, 00:23:26.888 "auth": { 00:23:26.888 "state": "completed", 00:23:26.888 "digest": "sha384", 00:23:26.888 "dhgroup": "ffdhe6144" 00:23:26.888 } 00:23:26.888 } 00:23:26.888 ]' 00:23:26.888 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.888 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.888 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.888 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:26.888 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:27.148 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.148 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.148 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.148 
20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.153 20:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.153 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.412 00:23:28.412 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.412 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.412 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.672 { 00:23:28.672 "cntlid": 87, 00:23:28.672 "qid": 0, 00:23:28.672 "state": "enabled", 00:23:28.672 "thread": "nvmf_tgt_poll_group_000", 00:23:28.672 "listen_address": { 00:23:28.672 "trtype": "TCP", 00:23:28.672 "adrfam": "IPv4", 00:23:28.672 "traddr": "10.0.0.2", 00:23:28.672 "trsvcid": "4420" 00:23:28.672 }, 00:23:28.672 "peer_address": { 00:23:28.672 "trtype": "TCP", 00:23:28.672 "adrfam": "IPv4", 00:23:28.672 "traddr": "10.0.0.1", 00:23:28.672 "trsvcid": "43698" 00:23:28.672 }, 00:23:28.672 "auth": { 00:23:28.672 "state": "completed", 00:23:28.672 "digest": "sha384", 00:23:28.672 "dhgroup": "ffdhe6144" 00:23:28.672 } 00:23:28.672 } 00:23:28.672 ]' 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:28.672 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.932 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.932 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.932 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.932 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:29.875 20:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.875 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.445 00:23:30.446 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.446 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.446 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.706 { 00:23:30.706 "cntlid": 89, 00:23:30.706 "qid": 0, 00:23:30.706 "state": "enabled", 00:23:30.706 "thread": "nvmf_tgt_poll_group_000", 00:23:30.706 "listen_address": { 00:23:30.706 "trtype": "TCP", 00:23:30.706 "adrfam": "IPv4", 00:23:30.706 "traddr": "10.0.0.2", 00:23:30.706 "trsvcid": "4420" 00:23:30.706 }, 00:23:30.706 "peer_address": { 00:23:30.706 "trtype": "TCP", 00:23:30.706 "adrfam": "IPv4", 00:23:30.706 "traddr": "10.0.0.1", 00:23:30.706 "trsvcid": "43714" 00:23:30.706 }, 00:23:30.706 "auth": { 00:23:30.706 "state": "completed", 00:23:30.706 "digest": "sha384", 00:23:30.706 "dhgroup": "ffdhe8192" 00:23:30.706 } 00:23:30.706 } 00:23:30.706 ]' 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.706 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.967 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:31.537 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
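The trace above repeats one fixed pattern for every digest/dhgroup/key combination: restrict the host to a single DH-HMAC-CHAP digest and FFDHE group, register the host NQN on the target subsystem with that key pair, authenticate once through the SPDK initiator (bdev_nvme_attach_controller) and once through the kernel initiator (nvme connect), confirm the negotiated parameters via nvmf_subsystem_get_qpairs, then tear everything down. A minimal sketch of one such iteration follows; it relies only on the commands visible in the trace, the helper definitions are reconstructed (hostrpc's expansion is shown at target/auth.sh@31, while rpc_cmd is assumed to drive the target app's default RPC socket), and the DHHC-1 secrets are placeholders rather than the values used in this run.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, not the harness code itself.
set -euo pipefail

# Helpers as used in the trace: hostrpc drives the host app's RPC socket;
# rpc_cmd is assumed here to hit the target app's default socket (/var/tmp/spdk.sock).
rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpcpy" -s /var/tmp/host.sock "$@"; }
rpc_cmd() { "$rpcpy" "$@"; }

digest=sha384 dhgroup=ffdhe8192 keyid=1
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=${hostnqn#nqn.2014-08.org.nvmexpress:uuid:}

# Limit the SPDK host to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the target subsystem with this key pair.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# SPDK initiator: attach a controller through the host RPC socket, then check
# what the target negotiated for the resulting queue pair.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0

# Kernel initiator: authenticate with nvme-cli using the matching DHHC-1 secrets
# (placeholder values here, not the secrets from this run).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
     --dhchap-secret 'DHHC-1:00:<host-secret-placeholder>:' \
     --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret-placeholder>:'
nvme disconnect -n "$subnqn"

# Remove the host again before the next key/dhgroup combination.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Running both paths per combination is what the interleaved rpc.py and nvme-cli entries in the trace reflect: the SPDK bdev_nvme initiator and the kernel NVMe/TCP initiator negotiate DH-HMAC-CHAP independently, so each key/dhgroup pair is exercised against both.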
00:23:31.537 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.537 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.537 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.797 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.367 00:23:32.367 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:32.367 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:32.367 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.627 { 00:23:32.627 "cntlid": 91, 00:23:32.627 "qid": 0, 00:23:32.627 "state": "enabled", 00:23:32.627 "thread": "nvmf_tgt_poll_group_000", 00:23:32.627 "listen_address": { 00:23:32.627 "trtype": "TCP", 00:23:32.627 "adrfam": "IPv4", 00:23:32.627 "traddr": "10.0.0.2", 00:23:32.627 "trsvcid": "4420" 00:23:32.627 }, 00:23:32.627 "peer_address": { 00:23:32.627 "trtype": "TCP", 00:23:32.627 "adrfam": "IPv4", 00:23:32.627 "traddr": "10.0.0.1", 00:23:32.627 "trsvcid": "42116" 00:23:32.627 }, 00:23:32.627 "auth": { 00:23:32.627 "state": "completed", 00:23:32.627 "digest": "sha384", 00:23:32.627 "dhgroup": "ffdhe8192" 00:23:32.627 } 00:23:32.627 } 00:23:32.627 ]' 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.627 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.628 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.888 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.829 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.401 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.401 { 00:23:34.401 "cntlid": 93, 00:23:34.401 "qid": 0, 00:23:34.401 "state": "enabled", 00:23:34.401 "thread": "nvmf_tgt_poll_group_000", 00:23:34.401 "listen_address": { 00:23:34.401 "trtype": "TCP", 00:23:34.401 "adrfam": "IPv4", 00:23:34.401 "traddr": "10.0.0.2", 00:23:34.401 "trsvcid": "4420" 00:23:34.401 }, 00:23:34.401 "peer_address": { 00:23:34.401 "trtype": "TCP", 00:23:34.401 "adrfam": "IPv4", 00:23:34.401 "traddr": "10.0.0.1", 00:23:34.401 "trsvcid": "42138" 00:23:34.401 }, 00:23:34.401 "auth": { 00:23:34.401 "state": "completed", 00:23:34.401 "digest": "sha384", 00:23:34.401 "dhgroup": "ffdhe8192" 00:23:34.401 } 00:23:34.401 } 00:23:34.401 ]' 00:23:34.401 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.662 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.923 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.494 20:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.494 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:35.754 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:36.325 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.325 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.586 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:23:36.586 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:36.586 { 00:23:36.586 "cntlid": 95, 00:23:36.586 "qid": 0, 00:23:36.586 "state": "enabled", 00:23:36.586 "thread": "nvmf_tgt_poll_group_000", 00:23:36.586 "listen_address": { 00:23:36.586 "trtype": "TCP", 00:23:36.586 "adrfam": "IPv4", 00:23:36.586 "traddr": "10.0.0.2", 00:23:36.586 "trsvcid": "4420" 00:23:36.586 }, 00:23:36.586 "peer_address": { 00:23:36.586 "trtype": "TCP", 00:23:36.586 "adrfam": "IPv4", 00:23:36.587 "traddr": "10.0.0.1", 00:23:36.587 "trsvcid": "42170" 00:23:36.587 }, 00:23:36.587 "auth": { 00:23:36.587 "state": "completed", 00:23:36.587 "digest": "sha384", 00:23:36.587 "dhgroup": "ffdhe8192" 00:23:36.587 } 00:23:36.587 } 00:23:36.587 ]' 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.587 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.847 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:23:37.418 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.679 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.940 00:23:37.940 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:37.940 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:37.940 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:38.201 { 00:23:38.201 "cntlid": 97, 00:23:38.201 "qid": 0, 00:23:38.201 
"state": "enabled", 00:23:38.201 "thread": "nvmf_tgt_poll_group_000", 00:23:38.201 "listen_address": { 00:23:38.201 "trtype": "TCP", 00:23:38.201 "adrfam": "IPv4", 00:23:38.201 "traddr": "10.0.0.2", 00:23:38.201 "trsvcid": "4420" 00:23:38.201 }, 00:23:38.201 "peer_address": { 00:23:38.201 "trtype": "TCP", 00:23:38.201 "adrfam": "IPv4", 00:23:38.201 "traddr": "10.0.0.1", 00:23:38.201 "trsvcid": "42196" 00:23:38.201 }, 00:23:38.201 "auth": { 00:23:38.201 "state": "completed", 00:23:38.201 "digest": "sha512", 00:23:38.201 "dhgroup": "null" 00:23:38.201 } 00:23:38.201 } 00:23:38.201 ]' 00:23:38.201 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.201 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.462 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:39.034 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.295 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.556 00:23:39.556 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.556 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.556 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.817 { 00:23:39.817 "cntlid": 99, 00:23:39.817 "qid": 0, 00:23:39.817 "state": "enabled", 00:23:39.817 "thread": "nvmf_tgt_poll_group_000", 00:23:39.817 "listen_address": { 00:23:39.817 "trtype": "TCP", 00:23:39.817 "adrfam": "IPv4", 00:23:39.817 "traddr": "10.0.0.2", 00:23:39.817 "trsvcid": "4420" 00:23:39.817 }, 00:23:39.817 "peer_address": { 00:23:39.817 "trtype": "TCP", 00:23:39.817 "adrfam": "IPv4", 
00:23:39.817 "traddr": "10.0.0.1", 00:23:39.817 "trsvcid": "42232" 00:23:39.817 }, 00:23:39.817 "auth": { 00:23:39.817 "state": "completed", 00:23:39.817 "digest": "sha512", 00:23:39.817 "dhgroup": "null" 00:23:39.817 } 00:23:39.817 } 00:23:39.817 ]' 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.817 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.078 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:40.649 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=null 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.910 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.911 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.171 00:23:41.171 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:41.171 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:41.171 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:41.431 { 00:23:41.431 "cntlid": 101, 00:23:41.431 "qid": 0, 00:23:41.431 "state": "enabled", 00:23:41.431 "thread": "nvmf_tgt_poll_group_000", 00:23:41.431 "listen_address": { 00:23:41.431 "trtype": "TCP", 00:23:41.431 "adrfam": "IPv4", 00:23:41.431 "traddr": "10.0.0.2", 00:23:41.431 "trsvcid": "4420" 00:23:41.431 }, 00:23:41.431 "peer_address": { 00:23:41.431 "trtype": "TCP", 00:23:41.431 "adrfam": "IPv4", 00:23:41.431 "traddr": "10.0.0.1", 00:23:41.431 "trsvcid": "42270" 00:23:41.431 }, 00:23:41.431 "auth": { 00:23:41.431 "state": "completed", 00:23:41.431 "digest": "sha512", 00:23:41.431 "dhgroup": "null" 00:23:41.431 } 00:23:41.431 } 00:23:41.431 ]' 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:41.431 
20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.431 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.692 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:42.263 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.263 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.523 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.523 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.524 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.784 00:23:42.784 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.784 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.784 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:43.045 { 00:23:43.045 "cntlid": 103, 00:23:43.045 "qid": 0, 00:23:43.045 "state": "enabled", 00:23:43.045 "thread": "nvmf_tgt_poll_group_000", 00:23:43.045 "listen_address": { 00:23:43.045 "trtype": "TCP", 00:23:43.045 "adrfam": "IPv4", 00:23:43.045 "traddr": "10.0.0.2", 00:23:43.045 "trsvcid": "4420" 00:23:43.045 }, 00:23:43.045 "peer_address": { 00:23:43.045 "trtype": "TCP", 00:23:43.045 "adrfam": "IPv4", 00:23:43.045 "traddr": "10.0.0.1", 00:23:43.045 "trsvcid": "53960" 00:23:43.045 }, 00:23:43.045 "auth": { 00:23:43.045 "state": "completed", 00:23:43.045 "digest": "sha512", 00:23:43.045 "dhgroup": "null" 00:23:43.045 } 00:23:43.045 } 00:23:43.045 ]' 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
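The qpair dumps repeated throughout this section are each checked with the same three jq assertions; a minimal sketch of that verification step for the sha512/null case shown above, with the subsystem NQN taken from this run and "rpc_cmd" again standing for the target-side RPC call:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # negotiated hash function
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # "null" means no DH exchange was used
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP handshake finished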
00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.045 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.306 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:44.248 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.248 
20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.248 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.509 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.509 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.770 { 00:23:44.770 "cntlid": 105, 00:23:44.770 "qid": 0, 00:23:44.770 "state": "enabled", 00:23:44.770 "thread": "nvmf_tgt_poll_group_000", 00:23:44.770 "listen_address": { 00:23:44.770 "trtype": "TCP", 00:23:44.770 "adrfam": "IPv4", 00:23:44.770 "traddr": "10.0.0.2", 00:23:44.770 "trsvcid": "4420" 00:23:44.770 }, 00:23:44.770 "peer_address": { 00:23:44.770 "trtype": "TCP", 00:23:44.770 "adrfam": "IPv4", 00:23:44.770 "traddr": "10.0.0.1", 00:23:44.770 "trsvcid": "53988" 00:23:44.770 }, 00:23:44.770 "auth": { 00:23:44.770 "state": "completed", 00:23:44.770 "digest": "sha512", 00:23:44.770 "dhgroup": "ffdhe2048" 00:23:44.770 } 00:23:44.770 } 00:23:44.770 ]' 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.770 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.770 20:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.031 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:45.604 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:45.864 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:23:45.864 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:45.864 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.865 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.125 00:23:46.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.125 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.125 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.125 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.125 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.386 { 00:23:46.386 "cntlid": 107, 00:23:46.386 "qid": 0, 00:23:46.386 "state": "enabled", 00:23:46.386 "thread": "nvmf_tgt_poll_group_000", 00:23:46.386 "listen_address": { 00:23:46.386 "trtype": "TCP", 00:23:46.386 "adrfam": "IPv4", 00:23:46.386 "traddr": "10.0.0.2", 00:23:46.386 "trsvcid": "4420" 00:23:46.386 }, 00:23:46.386 "peer_address": { 00:23:46.386 "trtype": "TCP", 00:23:46.386 "adrfam": "IPv4", 00:23:46.386 "traddr": "10.0.0.1", 00:23:46.386 "trsvcid": "54018" 00:23:46.386 }, 00:23:46.386 "auth": { 00:23:46.386 "state": "completed", 00:23:46.386 "digest": "sha512", 00:23:46.386 "dhgroup": "ffdhe2048" 00:23:46.386 } 00:23:46.386 } 00:23:46.386 ]' 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.386 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.646 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.258 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.548 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.549 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.549 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.810 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:47.810 { 00:23:47.810 "cntlid": 109, 00:23:47.810 "qid": 0, 00:23:47.810 "state": "enabled", 00:23:47.810 "thread": "nvmf_tgt_poll_group_000", 00:23:47.810 "listen_address": { 00:23:47.810 "trtype": "TCP", 00:23:47.810 "adrfam": "IPv4", 00:23:47.810 "traddr": "10.0.0.2", 00:23:47.810 "trsvcid": "4420" 00:23:47.810 }, 00:23:47.810 "peer_address": { 00:23:47.810 "trtype": "TCP", 00:23:47.810 "adrfam": "IPv4", 00:23:47.810 "traddr": "10.0.0.1", 00:23:47.810 "trsvcid": "54046" 00:23:47.810 }, 00:23:47.810 "auth": { 00:23:47.810 "state": "completed", 00:23:47.810 "digest": "sha512", 00:23:47.810 "dhgroup": "ffdhe2048" 00:23:47.810 } 00:23:47.810 } 00:23:47.810 ]' 00:23:47.810 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.070 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.331 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret 
DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:48.903 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:49.164 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:49.425 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:49.425 20:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:49.425 { 00:23:49.425 "cntlid": 111, 00:23:49.425 "qid": 0, 00:23:49.425 "state": "enabled", 00:23:49.425 "thread": "nvmf_tgt_poll_group_000", 00:23:49.425 "listen_address": { 00:23:49.425 "trtype": "TCP", 00:23:49.425 "adrfam": "IPv4", 00:23:49.425 "traddr": "10.0.0.2", 00:23:49.425 "trsvcid": "4420" 00:23:49.425 }, 00:23:49.425 "peer_address": { 00:23:49.425 "trtype": "TCP", 00:23:49.425 "adrfam": "IPv4", 00:23:49.425 "traddr": "10.0.0.1", 00:23:49.425 "trsvcid": "54084" 00:23:49.425 }, 00:23:49.425 "auth": { 00:23:49.425 "state": "completed", 00:23:49.425 "digest": "sha512", 00:23:49.425 "dhgroup": "ffdhe2048" 00:23:49.425 } 00:23:49.425 } 00:23:49.425 ]' 00:23:49.425 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.686 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.947 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.517 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.778 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.779 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.779 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.039 00:23:51.039 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:51.039 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:51.039 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:51.300 { 00:23:51.300 "cntlid": 113, 00:23:51.300 "qid": 0, 00:23:51.300 "state": "enabled", 00:23:51.300 "thread": "nvmf_tgt_poll_group_000", 00:23:51.300 "listen_address": { 00:23:51.300 "trtype": "TCP", 00:23:51.300 "adrfam": "IPv4", 00:23:51.300 "traddr": "10.0.0.2", 00:23:51.300 "trsvcid": "4420" 00:23:51.300 }, 00:23:51.300 "peer_address": { 00:23:51.300 "trtype": "TCP", 00:23:51.300 "adrfam": "IPv4", 00:23:51.300 "traddr": "10.0.0.1", 00:23:51.300 "trsvcid": "54110" 00:23:51.300 }, 00:23:51.300 "auth": { 00:23:51.300 "state": "completed", 00:23:51.300 "digest": "sha512", 00:23:51.300 "dhgroup": "ffdhe3072" 00:23:51.300 } 00:23:51.300 } 00:23:51.300 ]' 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.300 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.560 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:52.131 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.131 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
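For readability, one pass of the loop traced here boils down to the sketch below. It is reconstructed from the xtrace output rather than copied from target/auth.sh, the target-side rpc_cmd is assumed to talk to the default SPDK RPC socket, and key0/ckey0 stand for whichever registered key pair the current iteration selects.

    # Reconstructed sketch of one authentication pass (host RPC socket and addresses as in the trace).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Pin the host-side initiator to a single digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Register the host on the target subsystem with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host side; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The trace then verifies the qpair's auth state with jq, detaches, exercises nvme-cli
    # with the raw secrets, and removes the host again before the next iteration.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"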
00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.132 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.393 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.654 00:23:52.654 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:52.654 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.654 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.915 
20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.915 { 00:23:52.915 "cntlid": 115, 00:23:52.915 "qid": 0, 00:23:52.915 "state": "enabled", 00:23:52.915 "thread": "nvmf_tgt_poll_group_000", 00:23:52.915 "listen_address": { 00:23:52.915 "trtype": "TCP", 00:23:52.915 "adrfam": "IPv4", 00:23:52.915 "traddr": "10.0.0.2", 00:23:52.915 "trsvcid": "4420" 00:23:52.915 }, 00:23:52.915 "peer_address": { 00:23:52.915 "trtype": "TCP", 00:23:52.915 "adrfam": "IPv4", 00:23:52.915 "traddr": "10.0.0.1", 00:23:52.915 "trsvcid": "51488" 00:23:52.915 }, 00:23:52.915 "auth": { 00:23:52.915 "state": "completed", 00:23:52.915 "digest": "sha512", 00:23:52.915 "dhgroup": "ffdhe3072" 00:23:52.915 } 00:23:52.915 } 00:23:52.915 ]' 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.915 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.175 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:23:53.745 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
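The nvme-cli leg of each cycle (the @52 connect and @55 disconnect seen just above) reduces to the two commands below. The fabric address, hostnqn and hostid are the ones used throughout this run; the DHHC-1 secret strings are placeholders, not the actual keys printed in the trace.

    # Kernel-initiator check: connect with raw DH-HMAC-CHAP secrets, then disconnect.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret 'DHHC-1:01:<host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0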
00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.006 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.266 00:23:54.266 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:54.266 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:54.266 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.526 
20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:54.526 { 00:23:54.526 "cntlid": 117, 00:23:54.526 "qid": 0, 00:23:54.526 "state": "enabled", 00:23:54.526 "thread": "nvmf_tgt_poll_group_000", 00:23:54.526 "listen_address": { 00:23:54.526 "trtype": "TCP", 00:23:54.526 "adrfam": "IPv4", 00:23:54.526 "traddr": "10.0.0.2", 00:23:54.526 "trsvcid": "4420" 00:23:54.526 }, 00:23:54.526 "peer_address": { 00:23:54.526 "trtype": "TCP", 00:23:54.526 "adrfam": "IPv4", 00:23:54.526 "traddr": "10.0.0.1", 00:23:54.526 "trsvcid": "51524" 00:23:54.526 }, 00:23:54.526 "auth": { 00:23:54.526 "state": "completed", 00:23:54.526 "digest": "sha512", 00:23:54.526 "dhgroup": "ffdhe3072" 00:23:54.526 } 00:23:54.526 } 00:23:54.526 ]' 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.526 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.787 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:55.727 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:55.728 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:55.988 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.988 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.988 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.988 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:55.988 { 00:23:55.988 "cntlid": 119, 00:23:55.988 "qid": 0, 00:23:55.988 "state": "enabled", 00:23:55.988 "thread": "nvmf_tgt_poll_group_000", 00:23:55.988 "listen_address": { 00:23:55.988 "trtype": "TCP", 00:23:55.988 "adrfam": "IPv4", 00:23:55.988 "traddr": "10.0.0.2", 00:23:55.988 "trsvcid": "4420" 00:23:55.988 }, 00:23:55.988 
"peer_address": { 00:23:55.988 "trtype": "TCP", 00:23:55.988 "adrfam": "IPv4", 00:23:55.988 "traddr": "10.0.0.1", 00:23:55.988 "trsvcid": "51552" 00:23:55.988 }, 00:23:55.988 "auth": { 00:23:55.988 "state": "completed", 00:23:55.988 "digest": "sha512", 00:23:55.988 "dhgroup": "ffdhe3072" 00:23:55.988 } 00:23:55.988 } 00:23:55.988 ]' 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.248 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.508 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.078 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:57.338 20:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.338 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.599 00:23:57.599 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.599 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.599 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:57.859 { 00:23:57.859 "cntlid": 121, 00:23:57.859 "qid": 0, 00:23:57.859 "state": "enabled", 00:23:57.859 "thread": "nvmf_tgt_poll_group_000", 00:23:57.859 "listen_address": { 00:23:57.859 "trtype": "TCP", 00:23:57.859 "adrfam": "IPv4", 00:23:57.859 "traddr": "10.0.0.2", 00:23:57.859 "trsvcid": "4420" 00:23:57.859 }, 00:23:57.859 "peer_address": { 00:23:57.859 "trtype": "TCP", 00:23:57.859 "adrfam": "IPv4", 00:23:57.859 "traddr": "10.0.0.1", 00:23:57.859 "trsvcid": "51586" 00:23:57.859 }, 00:23:57.859 "auth": { 00:23:57.859 "state": "completed", 00:23:57.859 "digest": "sha512", 00:23:57.859 "dhgroup": 
"ffdhe4096" 00:23:57.859 } 00:23:57.859 } 00:23:57.859 ]' 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.859 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.120 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:23:58.690 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.951 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.212 00:23:59.212 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:59.212 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:59.212 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:59.474 { 00:23:59.474 "cntlid": 123, 00:23:59.474 "qid": 0, 00:23:59.474 "state": "enabled", 00:23:59.474 "thread": "nvmf_tgt_poll_group_000", 00:23:59.474 "listen_address": { 00:23:59.474 "trtype": "TCP", 00:23:59.474 "adrfam": "IPv4", 00:23:59.474 "traddr": "10.0.0.2", 00:23:59.474 "trsvcid": "4420" 00:23:59.474 }, 00:23:59.474 "peer_address": { 00:23:59.474 "trtype": "TCP", 00:23:59.474 "adrfam": "IPv4", 00:23:59.474 "traddr": "10.0.0.1", 00:23:59.474 "trsvcid": "51614" 00:23:59.474 }, 00:23:59.474 "auth": { 00:23:59.474 "state": "completed", 00:23:59.474 "digest": "sha512", 00:23:59.474 "dhgroup": "ffdhe4096" 00:23:59.474 } 00:23:59.474 } 00:23:59.474 ]' 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
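The three jq probes around this point (digest, dhgroup, auth state) are what actually assert that the connection authenticated with the negotiated parameters. Condensed, the check amounts to the sketch below, with sha512/ffdhe4096 standing in for whichever combination the current pass configured and the default target RPC socket assumed for rpc_cmd.

    # Condensed qpair-auth check against the target.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]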
00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:59.474 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:59.735 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.735 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.735 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.735 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.678 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.939 00:24:00.939 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:00.939 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:00.939 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:01.200 { 00:24:01.200 "cntlid": 125, 00:24:01.200 "qid": 0, 00:24:01.200 "state": "enabled", 00:24:01.200 "thread": "nvmf_tgt_poll_group_000", 00:24:01.200 "listen_address": { 00:24:01.200 "trtype": "TCP", 00:24:01.200 "adrfam": "IPv4", 00:24:01.200 "traddr": "10.0.0.2", 00:24:01.200 "trsvcid": "4420" 00:24:01.200 }, 00:24:01.200 "peer_address": { 00:24:01.200 "trtype": "TCP", 00:24:01.200 "adrfam": "IPv4", 00:24:01.200 "traddr": "10.0.0.1", 00:24:01.200 "trsvcid": "51648" 00:24:01.200 }, 00:24:01.200 "auth": { 00:24:01.200 "state": "completed", 00:24:01.200 "digest": "sha512", 00:24:01.200 "dhgroup": "ffdhe4096" 00:24:01.200 } 00:24:01.200 } 00:24:01.200 ]' 00:24:01.200 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:01.200 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:01.201 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:01.201 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:01.201 20:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:01.201 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.201 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.201 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.461 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:24:02.033 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:02.293 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.294 
20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.294 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.555 00:24:02.555 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:02.555 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.555 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:02.816 { 00:24:02.816 "cntlid": 127, 00:24:02.816 "qid": 0, 00:24:02.816 "state": "enabled", 00:24:02.816 "thread": "nvmf_tgt_poll_group_000", 00:24:02.816 "listen_address": { 00:24:02.816 "trtype": "TCP", 00:24:02.816 "adrfam": "IPv4", 00:24:02.816 "traddr": "10.0.0.2", 00:24:02.816 "trsvcid": "4420" 00:24:02.816 }, 00:24:02.816 "peer_address": { 00:24:02.816 "trtype": "TCP", 00:24:02.816 "adrfam": "IPv4", 00:24:02.816 "traddr": "10.0.0.1", 00:24:02.816 "trsvcid": "37020" 00:24:02.816 }, 00:24:02.816 "auth": { 00:24:02.816 "state": "completed", 00:24:02.816 "digest": "sha512", 00:24:02.816 "dhgroup": "ffdhe4096" 00:24:02.816 } 00:24:02.816 } 00:24:02.816 ]' 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.816 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.077 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.019 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.280 00:24:04.280 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:04.280 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:04.280 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.541 { 00:24:04.541 "cntlid": 129, 00:24:04.541 "qid": 0, 00:24:04.541 "state": "enabled", 00:24:04.541 "thread": "nvmf_tgt_poll_group_000", 00:24:04.541 "listen_address": { 00:24:04.541 "trtype": "TCP", 00:24:04.541 "adrfam": "IPv4", 00:24:04.541 "traddr": "10.0.0.2", 00:24:04.541 "trsvcid": "4420" 00:24:04.541 }, 00:24:04.541 "peer_address": { 00:24:04.541 "trtype": "TCP", 00:24:04.541 "adrfam": "IPv4", 00:24:04.541 "traddr": "10.0.0.1", 00:24:04.541 "trsvcid": "37050" 00:24:04.541 }, 00:24:04.541 "auth": { 00:24:04.541 "state": "completed", 00:24:04.541 "digest": "sha512", 00:24:04.541 "dhgroup": "ffdhe6144" 00:24:04.541 } 00:24:04.541 } 00:24:04.541 ]' 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.541 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.802 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.741 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.003 00:24:06.003 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.003 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:06.003 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.263 { 00:24:06.263 "cntlid": 131, 00:24:06.263 "qid": 0, 00:24:06.263 "state": "enabled", 00:24:06.263 "thread": "nvmf_tgt_poll_group_000", 00:24:06.263 "listen_address": { 00:24:06.263 "trtype": "TCP", 00:24:06.263 "adrfam": "IPv4", 00:24:06.263 "traddr": "10.0.0.2", 00:24:06.263 "trsvcid": "4420" 00:24:06.263 }, 00:24:06.263 "peer_address": { 00:24:06.263 "trtype": "TCP", 00:24:06.263 "adrfam": "IPv4", 00:24:06.263 "traddr": "10.0.0.1", 00:24:06.263 "trsvcid": "37070" 00:24:06.263 }, 00:24:06.263 "auth": { 00:24:06.263 "state": "completed", 00:24:06.263 "digest": "sha512", 00:24:06.263 "dhgroup": "ffdhe6144" 00:24:06.263 } 00:24:06.263 } 00:24:06.263 ]' 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:06.263 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.524 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.524 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.524 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.524 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret 
DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:24:07.193 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.454 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.025 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.025 { 00:24:08.025 "cntlid": 133, 00:24:08.025 "qid": 0, 00:24:08.025 "state": "enabled", 00:24:08.025 "thread": "nvmf_tgt_poll_group_000", 00:24:08.025 "listen_address": { 00:24:08.025 "trtype": "TCP", 00:24:08.025 "adrfam": "IPv4", 00:24:08.025 "traddr": "10.0.0.2", 00:24:08.025 "trsvcid": "4420" 00:24:08.025 }, 00:24:08.025 "peer_address": { 00:24:08.025 "trtype": "TCP", 00:24:08.025 "adrfam": "IPv4", 00:24:08.025 "traddr": "10.0.0.1", 00:24:08.025 "trsvcid": "37096" 00:24:08.025 }, 00:24:08.025 "auth": { 00:24:08.025 "state": "completed", 00:24:08.025 "digest": "sha512", 00:24:08.025 "dhgroup": "ffdhe6144" 00:24:08.025 } 00:24:08.025 } 00:24:08.025 ]' 00:24:08.025 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.025 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.025 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.226 20:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:09.226 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:09.514 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.515 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.515 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.515 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.515 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.775 00:24:09.775 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:09.775 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:09.775 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.035 20:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:10.035 { 00:24:10.035 "cntlid": 135, 00:24:10.035 "qid": 0, 00:24:10.035 "state": "enabled", 00:24:10.035 "thread": "nvmf_tgt_poll_group_000", 00:24:10.035 "listen_address": { 00:24:10.035 "trtype": "TCP", 00:24:10.035 "adrfam": "IPv4", 00:24:10.035 "traddr": "10.0.0.2", 00:24:10.035 "trsvcid": "4420" 00:24:10.035 }, 00:24:10.035 "peer_address": { 00:24:10.035 "trtype": "TCP", 00:24:10.035 "adrfam": "IPv4", 00:24:10.035 "traddr": "10.0.0.1", 00:24:10.035 "trsvcid": "37130" 00:24:10.035 }, 00:24:10.035 "auth": { 00:24:10.035 "state": "completed", 00:24:10.035 "digest": "sha512", 00:24:10.035 "dhgroup": "ffdhe6144" 00:24:10.035 } 00:24:10.035 } 00:24:10.035 ]' 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.035 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.294 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.864 20:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:10.864 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.124 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.125 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.695 00:24:11.695 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:11.695 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:11.695 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.956 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.956 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.956 20:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.956 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.956 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.956 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:11.956 { 00:24:11.956 "cntlid": 137, 00:24:11.956 "qid": 0, 00:24:11.956 "state": "enabled", 00:24:11.956 "thread": "nvmf_tgt_poll_group_000", 00:24:11.956 "listen_address": { 00:24:11.956 "trtype": "TCP", 00:24:11.956 "adrfam": "IPv4", 00:24:11.956 "traddr": "10.0.0.2", 00:24:11.956 "trsvcid": "4420" 00:24:11.956 }, 00:24:11.956 "peer_address": { 00:24:11.956 "trtype": "TCP", 00:24:11.956 "adrfam": "IPv4", 00:24:11.956 "traddr": "10.0.0.1", 00:24:11.957 "trsvcid": "37150" 00:24:11.957 }, 00:24:11.957 "auth": { 00:24:11.957 "state": "completed", 00:24:11.957 "digest": "sha512", 00:24:11.957 "dhgroup": "ffdhe8192" 00:24:11.957 } 00:24:11.957 } 00:24:11.957 ]' 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.957 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.218 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:12.790 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.052 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.623 00:24:13.623 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:13.623 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.623 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:13.884 { 00:24:13.884 "cntlid": 139, 00:24:13.884 "qid": 0, 00:24:13.884 "state": "enabled", 00:24:13.884 "thread": "nvmf_tgt_poll_group_000", 00:24:13.884 "listen_address": { 00:24:13.884 "trtype": "TCP", 00:24:13.884 "adrfam": "IPv4", 00:24:13.884 "traddr": "10.0.0.2", 00:24:13.884 "trsvcid": "4420" 00:24:13.884 }, 00:24:13.884 "peer_address": { 00:24:13.884 "trtype": "TCP", 00:24:13.884 "adrfam": "IPv4", 00:24:13.884 "traddr": "10.0.0.1", 00:24:13.884 "trsvcid": "57458" 00:24:13.884 }, 00:24:13.884 "auth": { 00:24:13.884 "state": "completed", 00:24:13.884 "digest": "sha512", 00:24:13.884 "dhgroup": "ffdhe8192" 00:24:13.884 } 00:24:13.884 } 00:24:13.884 ]' 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.884 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.145 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZDE5Y2JmMzJkYTU2NjQyYzFkZGUwMjExNDNlZWEzZDJ7Xoqq: --dhchap-ctrl-secret DHHC-1:02:ZmI5ZDM1Y2U0MDczODdlODAzMWU2ODBjNTgxMDdiODUyMjEwOWM1NzdhZjk2YzJjL/3gRw==: 00:24:14.716 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:14.977 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.978 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.549 00:24:15.549 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:15.549 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:15.549 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:15.810 { 00:24:15.810 "cntlid": 141, 00:24:15.810 "qid": 0, 00:24:15.810 "state": "enabled", 00:24:15.810 "thread": "nvmf_tgt_poll_group_000", 00:24:15.810 "listen_address": { 00:24:15.810 "trtype": "TCP", 00:24:15.810 "adrfam": "IPv4", 
00:24:15.810 "traddr": "10.0.0.2", 00:24:15.810 "trsvcid": "4420" 00:24:15.810 }, 00:24:15.810 "peer_address": { 00:24:15.810 "trtype": "TCP", 00:24:15.810 "adrfam": "IPv4", 00:24:15.810 "traddr": "10.0.0.1", 00:24:15.810 "trsvcid": "57504" 00:24:15.810 }, 00:24:15.810 "auth": { 00:24:15.810 "state": "completed", 00:24:15.810 "digest": "sha512", 00:24:15.810 "dhgroup": "ffdhe8192" 00:24:15.810 } 00:24:15.810 } 00:24:15.810 ]' 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:15.810 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:15.811 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:15.811 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.811 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.811 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.071 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDlmMzJkNDQxMzlkOTcyOTY0MWMyMDZiOGZiYzAxYTM2YzljMGExZTAwN2VmN2UxakG/Pg==: --dhchap-ctrl-secret DHHC-1:01:ZDMwYTM0YjVkZjBjN2IyYTAyNGI4OTRiMTY2Y2U5ZjEPn/j/: 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:17.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.014 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.585 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.585 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.585 { 00:24:17.585 "cntlid": 143, 00:24:17.585 "qid": 0, 00:24:17.585 "state": "enabled", 00:24:17.585 "thread": "nvmf_tgt_poll_group_000", 00:24:17.585 "listen_address": { 00:24:17.585 "trtype": "TCP", 00:24:17.585 "adrfam": "IPv4", 00:24:17.585 "traddr": "10.0.0.2", 00:24:17.585 "trsvcid": "4420" 00:24:17.585 }, 00:24:17.585 "peer_address": { 00:24:17.585 "trtype": "TCP", 00:24:17.585 "adrfam": "IPv4", 00:24:17.585 "traddr": "10.0.0.1", 00:24:17.586 "trsvcid": "57536" 00:24:17.586 }, 00:24:17.586 "auth": { 00:24:17.586 "state": "completed", 00:24:17.586 "digest": "sha512", 00:24:17.586 "dhgroup": "ffdhe8192" 00:24:17.586 } 00:24:17.586 } 00:24:17.586 ]' 
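The JSON block just above is the target-side qpair listing that connect_authenticate() inspects. Its verification step is small: confirm on the host that the controller attached as nvme0, then filter the qpair listing with jq and compare the negotiated digest, DH group and auth state against the requested ones. A minimal sketch, using the host RPC socket from this run and assuming the default socket for the target-side call:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller must show up under the expected name.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: pull the qpairs for the subsystem and check the auth block,
# here against the sha512/ffdhe8192/key3 iteration recorded above.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]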
00:24:17.586 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:17.586 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.586 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.847 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
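At this point the trace leaves the per-group loop: target/auth.sh@102 resets the host's bdev_nvme options once with every supported digest and DH group, and the @114 connect_authenticate calls that follow check that sha512 with ffdhe8192 is still what gets negotiated under that combined configuration. The set-up call, with only the rpc.py path factored into a variable:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Enable every DH-HMAC-CHAP digest and DH group on the host in one call.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192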
00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.790 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.362 00:24:19.362 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.362 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.362 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.623 { 00:24:19.623 "cntlid": 145, 00:24:19.623 "qid": 0, 00:24:19.623 "state": "enabled", 00:24:19.623 "thread": "nvmf_tgt_poll_group_000", 00:24:19.623 "listen_address": { 00:24:19.623 "trtype": "TCP", 00:24:19.623 "adrfam": "IPv4", 00:24:19.623 "traddr": "10.0.0.2", 00:24:19.623 "trsvcid": "4420" 00:24:19.623 }, 00:24:19.623 "peer_address": { 00:24:19.623 "trtype": "TCP", 00:24:19.623 "adrfam": "IPv4", 00:24:19.623 "traddr": "10.0.0.1", 00:24:19.623 "trsvcid": "57562" 00:24:19.623 }, 00:24:19.623 "auth": { 00:24:19.623 "state": "completed", 00:24:19.623 "digest": "sha512", 
00:24:19.623 "dhgroup": "ffdhe8192" 00:24:19.623 } 00:24:19.623 } 00:24:19.623 ]' 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.623 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.883 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:OTZmM2RlOTE2YzNiYzg2YzkwY2RkYzFiNDkyODJiN2Y5YjZmMTBhMDUxODZmZjU3UT2HNg==: --dhchap-ctrl-secret DHHC-1:03:MGJiMTA2Y2RhYWNhMjMyNTEzYzNkNzQ4MGQzYWQzNThmN2Q2MWIxOThlMGMyMDk0OTViMDUzY2I5YTA2ZmQwYmoP048=: 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # 
valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:20.826 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:21.086 request: 00:24:21.086 { 00:24:21.086 "name": "nvme0", 00:24:21.086 "trtype": "tcp", 00:24:21.086 "traddr": "10.0.0.2", 00:24:21.086 "adrfam": "ipv4", 00:24:21.086 "trsvcid": "4420", 00:24:21.086 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:21.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:21.086 "prchk_reftag": false, 00:24:21.086 "prchk_guard": false, 00:24:21.086 "hdgst": false, 00:24:21.086 "ddgst": false, 00:24:21.086 "dhchap_key": "key2", 00:24:21.086 "method": "bdev_nvme_attach_controller", 00:24:21.086 "req_id": 1 00:24:21.086 } 00:24:21.086 Got JSON-RPC error response 00:24:21.086 response: 00:24:21.086 { 00:24:21.086 "code": -5, 00:24:21.086 "message": "Input/output error" 00:24:21.086 } 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.086 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.087 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.659 request: 00:24:21.659 { 00:24:21.659 "name": "nvme0", 00:24:21.659 "trtype": "tcp", 00:24:21.659 "traddr": "10.0.0.2", 00:24:21.659 "adrfam": "ipv4", 00:24:21.659 "trsvcid": "4420", 00:24:21.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:21.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:21.659 "prchk_reftag": false, 00:24:21.659 "prchk_guard": false, 00:24:21.659 "hdgst": false, 00:24:21.659 "ddgst": false, 00:24:21.659 "dhchap_key": "key1", 00:24:21.659 "dhchap_ctrlr_key": "ckey2", 00:24:21.659 "method": "bdev_nvme_attach_controller", 00:24:21.659 "req_id": 1 00:24:21.659 } 00:24:21.659 Got JSON-RPC error response 00:24:21.659 response: 00:24:21.659 { 00:24:21.659 "code": -5, 00:24:21.659 "message": "Input/output error" 00:24:21.659 } 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 
0 )) 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.659 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.231 request: 00:24:22.231 { 00:24:22.231 "name": "nvme0", 00:24:22.231 "trtype": "tcp", 00:24:22.231 "traddr": "10.0.0.2", 00:24:22.231 "adrfam": "ipv4", 00:24:22.231 "trsvcid": "4420", 00:24:22.231 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:22.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:22.231 "prchk_reftag": false, 
00:24:22.231 "prchk_guard": false, 00:24:22.231 "hdgst": false, 00:24:22.231 "ddgst": false, 00:24:22.231 "dhchap_key": "key1", 00:24:22.231 "dhchap_ctrlr_key": "ckey1", 00:24:22.231 "method": "bdev_nvme_attach_controller", 00:24:22.231 "req_id": 1 00:24:22.231 } 00:24:22.231 Got JSON-RPC error response 00:24:22.231 response: 00:24:22.231 { 00:24:22.231 "code": -5, 00:24:22.231 "message": "Input/output error" 00:24:22.231 } 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3635587 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3635587 ']' 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3635587 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635587 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635587' 00:24:22.231 killing process with pid 3635587 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3635587 00:24:22.231 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3635587 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3662549 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- 
# ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3662549 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3662549 ']' 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.173 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3662549 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3662549 ']' 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
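For the failure-path checks that follow, the target is restarted with the nvmf_auth log flag (target/auth.sh@139) so each rejected handshake is traced. A minimal sketch of that restart, assuming the same network namespace and binary path this run uses; the polling loop is illustrative, the test's own waitforlisten helper does the equivalent:

  ns=cvl_0_0_ns_spdk
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  # -L nvmf_auth enables auth tracing; --wait-for-rpc defers subsystem init
  # until the test drives it over RPC.
  ip netns exec "$ns" "$tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Wait until the app answers on its default RPC socket.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done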
00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.120 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.120 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.120 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:24.120 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:24:24.120 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.120 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.381 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.382 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.956 00:24:24.956 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.956 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.956 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.217 { 00:24:25.217 "cntlid": 1, 00:24:25.217 "qid": 0, 00:24:25.217 "state": "enabled", 00:24:25.217 "thread": "nvmf_tgt_poll_group_000", 00:24:25.217 "listen_address": { 00:24:25.217 "trtype": "TCP", 00:24:25.217 "adrfam": "IPv4", 00:24:25.217 "traddr": "10.0.0.2", 00:24:25.217 "trsvcid": "4420" 00:24:25.217 }, 00:24:25.217 "peer_address": { 00:24:25.217 "trtype": "TCP", 00:24:25.217 "adrfam": "IPv4", 00:24:25.217 "traddr": "10.0.0.1", 00:24:25.217 "trsvcid": "50020" 00:24:25.217 }, 00:24:25.217 "auth": { 00:24:25.217 "state": "completed", 00:24:25.217 "digest": "sha512", 00:24:25.217 "dhgroup": "ffdhe8192" 00:24:25.217 } 00:24:25.217 } 00:24:25.217 ]' 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.217 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.478 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NWVhYmM4NzQ0N2Y0MTY1NWNkOGI3YjgxMDI2NzQ5NDBmMjcwMTk5ZmE3ZTM1MDEwNWM1YWQ3M2YxNDE5MGNjYhYs00s=: 00:24:26.050 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:26.310 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.311 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.571 request: 00:24:26.571 { 00:24:26.571 "name": "nvme0", 00:24:26.571 "trtype": "tcp", 00:24:26.571 "traddr": "10.0.0.2", 00:24:26.571 "adrfam": "ipv4", 00:24:26.571 "trsvcid": "4420", 00:24:26.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:26.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:26.571 "prchk_reftag": false, 00:24:26.571 "prchk_guard": false, 00:24:26.571 "hdgst": false, 00:24:26.571 "ddgst": false, 00:24:26.571 "dhchap_key": "key3", 00:24:26.571 "method": "bdev_nvme_attach_controller", 00:24:26.571 "req_id": 1 00:24:26.571 } 00:24:26.571 Got JSON-RPC error response 00:24:26.571 response: 00:24:26.571 { 00:24:26.571 "code": -5, 00:24:26.571 "message": "Input/output error" 00:24:26.571 } 00:24:26.571 20:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:26.571 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.832 request: 00:24:26.832 { 00:24:26.832 "name": "nvme0", 00:24:26.832 "trtype": "tcp", 00:24:26.832 "traddr": "10.0.0.2", 00:24:26.832 "adrfam": "ipv4", 00:24:26.832 "trsvcid": "4420", 00:24:26.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:26.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:26.832 "prchk_reftag": false, 00:24:26.832 "prchk_guard": false, 00:24:26.832 "hdgst": false, 00:24:26.832 "ddgst": false, 00:24:26.832 "dhchap_key": "key3", 00:24:26.832 
"method": "bdev_nvme_attach_controller", 00:24:26.832 "req_id": 1 00:24:26.832 } 00:24:26.832 Got JSON-RPC error response 00:24:26.832 response: 00:24:26.832 { 00:24:26.832 "code": -5, 00:24:26.832 "message": "Input/output error" 00:24:26.832 } 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:26.832 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.093 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:27.094 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:27.355 request: 00:24:27.355 { 00:24:27.355 "name": "nvme0", 00:24:27.355 "trtype": "tcp", 00:24:27.355 "traddr": "10.0.0.2", 00:24:27.355 "adrfam": "ipv4", 00:24:27.355 "trsvcid": "4420", 00:24:27.355 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:27.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:27.355 "prchk_reftag": false, 00:24:27.355 "prchk_guard": false, 00:24:27.355 "hdgst": false, 00:24:27.355 "ddgst": false, 00:24:27.355 "dhchap_key": "key0", 00:24:27.355 "dhchap_ctrlr_key": "key1", 00:24:27.355 "method": "bdev_nvme_attach_controller", 00:24:27.355 "req_id": 1 00:24:27.355 } 00:24:27.355 Got JSON-RPC error response 00:24:27.355 response: 00:24:27.355 { 00:24:27.355 "code": -5, 00:24:27.355 "message": "Input/output error" 00:24:27.355 } 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:27.355 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.355 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:24:27.616 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.616 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.616 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3635937 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3635937 ']' 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3635937 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635937 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635937' 00:24:27.876 killing process with pid 3635937 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3635937 00:24:27.876 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3635937 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.261 rmmod nvme_tcp 00:24:29.261 rmmod nvme_fabrics 00:24:29.261 rmmod nvme_keyring 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 3662549 ']' 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3662549 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3662549 ']' 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3662549 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.261 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662549 00:24:29.261 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:29.261 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:29.261 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662549' 00:24:29.261 killing process with pid 3662549 00:24:29.261 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3662549 00:24:29.261 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3662549 00:24:30.203 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.203 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.204 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jT5 /tmp/spdk.key-sha256.vPl /tmp/spdk.key-sha384.bjR /tmp/spdk.key-sha512.eBX /tmp/spdk.key-sha512.KWO /tmp/spdk.key-sha384.IEH /tmp/spdk.key-sha256.AlJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:32.118 00:24:32.118 real 2m27.138s 00:24:32.118 user 5m25.316s 00:24:32.118 sys 0m21.535s 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.118 ************************************ 00:24:32.118 END TEST nvmf_auth_target 00:24:32.118 ************************************ 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:32.118 20:31:43 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.118 20:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.118 ************************************ 00:24:32.119 START TEST nvmf_bdevio_no_huge 00:24:32.119 ************************************ 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:32.119 * Looking for test storage... 00:24:32.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.119 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
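nvmf/common.sh derives the host identity from nvme gen-hostnqn (@17-@19) and, because bdevio.sh was invoked with --no-hugepages, appends the no-huge flags to the app arguments (@31). One way to reproduce the identity step, assuming nvme-cli is installed as it is on this node; the parameter expansion is illustrative rather than the exact line from common.sh:

  # gen-hostnqn prints "nqn.2014-08.org.nvmexpress:uuid:<uuid>"; the hostid is the uuid suffix.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # These are later passed straight to nvme-cli, e.g.:
  #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"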
00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.381 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 
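build_nvmf_app_args above assembles the target's command line in a bash array: -i carries the shared-memory id, -e 0xFFFF the tracepoint mask, and the NO_HUGE options are spliced in for the no-hugepages variant of the test. A rough sketch of that argv-array pattern follows; APP, SHM_ID and TRACE_MASK are placeholder names and /path/to/nvmf_tgt is not a real path, only the flag layout mirrors the trace.

#!/usr/bin/env bash
# Sketch of the pattern: append options to an array conditionally, then
# expand the array once when the process is launched.
APP=(/path/to/nvmf_tgt)              # placeholder binary path
SHM_ID=0
TRACE_MASK=0xFFFF
NO_HUGE=(--no-huge -s 1024)          # set to () to run with hugepages

APP+=(-i "$SHM_ID" -e "$TRACE_MASK")
APP+=("${NO_HUGE[@]}")

echo "launch command: ${APP[*]}"
# "${APP[@]}" &                      # uncomment to actually start the target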
00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.971 
20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
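The "Found net devices under ..." messages come from a sysfs glob: for every supported PCI function the kernel lists its network interfaces under /sys/bus/pci/devices/<address>/net/, and the script keeps only the interface names. A standalone sketch of that lookup is below; the PCI address is simply the one from this run and would differ on other hardware.

#!/usr/bin/env bash
# Sketch: list the kernel net interfaces that belong to one PCI function.
pci=0000:4b:00.0                     # example address taken from this log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)

if [[ -e ${pci_net_devs[0]} ]]; then
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No net devices found under $pci"
fi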
00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.971 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
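nvmf_tcp_init above carves the target-side port into its own network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator side with 10.0.0.1/24, and TCP port 4420 is opened for NVMe/TCP. A condensed sketch of that sequence follows; the interface and namespace names are the ones from this log, the commands need root, and on other machines the NIC names would differ.

#!/usr/bin/env bash
# Condensed sketch of the namespace setup traced above (run as root).
set -e
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"                           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"                    # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"   # target side
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
# ping -c 1 10.0.0.2   # connectivity check, as the log does next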
00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:24:39.232 00:24:39.232 --- 10.0.0.2 ping statistics --- 00:24:39.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.232 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:24:39.232 00:24:39.232 --- 10.0.0.1 ping statistics --- 00:24:39.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.232 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3667960 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3667960 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3667960 ']' 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:39.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.232 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:39.493 [2024-07-22 20:31:51.256940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:39.493 [2024-07-22 20:31:51.257082] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:39.493 [2024-07-22 20:31:51.430044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.754 [2024-07-22 20:31:51.648063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.754 [2024-07-22 20:31:51.648127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.754 [2024-07-22 20:31:51.648143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.754 [2024-07-22 20:31:51.648154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.754 [2024-07-22 20:31:51.648166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.754 [2024-07-22 20:31:51.648399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:39.754 [2024-07-22 20:31:51.648630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:39.754 [2024-07-22 20:31:51.649454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:39.755 [2024-07-22 20:31:51.649572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.016 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.016 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:24:40.016 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:40.016 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:40.016 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 [2024-07-22 20:31:52.071562] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 Malloc0 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:40.277 [2024-07-22 20:31:52.164094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:40.277 { 00:24:40.277 "params": { 00:24:40.277 "name": "Nvme$subsystem", 00:24:40.277 "trtype": "$TEST_TRANSPORT", 00:24:40.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:40.277 "adrfam": "ipv4", 00:24:40.277 "trsvcid": "$NVMF_PORT", 00:24:40.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:40.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:40.277 "hdgst": ${hdgst:-false}, 00:24:40.277 "ddgst": ${ddgst:-false} 00:24:40.277 }, 00:24:40.277 "method": "bdev_nvme_attach_controller" 00:24:40.277 } 00:24:40.277 EOF 00:24:40.277 )") 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:24:40.277 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:40.277 "params": { 00:24:40.277 "name": "Nvme1", 00:24:40.277 "trtype": "tcp", 00:24:40.277 "traddr": "10.0.0.2", 00:24:40.277 "adrfam": "ipv4", 00:24:40.277 "trsvcid": "4420", 00:24:40.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.277 "hdgst": false, 00:24:40.277 "ddgst": false 00:24:40.277 }, 00:24:40.277 "method": "bdev_nvme_attach_controller" 00:24:40.277 }' 00:24:40.278 [2024-07-22 20:31:52.260454] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:40.278 [2024-07-22 20:31:52.260589] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3668305 ] 00:24:40.538 [2024-07-22 20:31:52.401418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:40.798 [2024-07-22 20:31:52.597185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.798 [2024-07-22 20:31:52.597272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.798 [2024-07-22 20:31:52.597497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.370 I/O targets: 00:24:41.370 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:41.370 00:24:41.370 00:24:41.370 CUnit - A unit testing framework for C - Version 2.1-3 00:24:41.370 http://cunit.sourceforge.net/ 00:24:41.370 00:24:41.370 00:24:41.370 Suite: bdevio tests on: Nvme1n1 00:24:41.370 Test: blockdev write read block ...passed 00:24:41.370 Test: blockdev write zeroes read block ...passed 00:24:41.370 Test: blockdev write zeroes read no split ...passed 00:24:41.370 Test: blockdev write zeroes read split ...passed 00:24:41.370 Test: blockdev write zeroes read split partial ...passed 00:24:41.370 Test: blockdev reset ...[2024-07-22 20:31:53.347897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.370 [2024-07-22 20:31:53.348005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000386600 (9): Bad file descriptor 00:24:41.370 [2024-07-22 20:31:53.366355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:41.370 passed 00:24:41.370 Test: blockdev write read 8 blocks ...passed 00:24:41.370 Test: blockdev write read size > 128k ...passed 00:24:41.370 Test: blockdev write read invalid size ...passed 00:24:41.630 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:41.630 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:41.630 Test: blockdev write read max offset ...passed 00:24:41.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:41.630 Test: blockdev writev readv 8 blocks ...passed 00:24:41.630 Test: blockdev writev readv 30 x 1block ...passed 00:24:41.630 Test: blockdev writev readv block ...passed 00:24:41.630 Test: blockdev writev readv size > 128k ...passed 00:24:41.630 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:41.630 Test: blockdev comparev and writev ...[2024-07-22 20:31:53.636672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.636706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.636722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.636731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.637335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.637351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.637364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.637375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.637938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.637954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.637967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.637974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.638555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.638569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.630 [2024-07-22 20:31:53.638587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:41.630 [2024-07-22 20:31:53.638595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.890 passed 00:24:41.890 Test: blockdev nvme passthru rw ...passed 00:24:41.890 Test: blockdev nvme passthru vendor specific ...[2024-07-22 20:31:53.723174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:41.890 [2024-07-22 20:31:53.723195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.890 [2024-07-22 20:31:53.723626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:41.890 [2024-07-22 20:31:53.723638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.890 [2024-07-22 20:31:53.724014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:41.890 [2024-07-22 20:31:53.724027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.890 [2024-07-22 20:31:53.724433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:41.890 [2024-07-22 20:31:53.724445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.890 passed 00:24:41.890 Test: blockdev nvme admin passthru ...passed 00:24:41.890 Test: blockdev copy ...passed 00:24:41.890 00:24:41.890 Run Summary: Type Total Ran Passed Failed Inactive 00:24:41.890 suites 1 1 n/a 0 0 00:24:41.890 tests 23 23 23 0 0 00:24:41.890 asserts 152 152 152 0 n/a 00:24:41.890 00:24:41.890 Elapsed time = 1.460 seconds 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.461 rmmod nvme_tcp 00:24:42.461 rmmod nvme_fabrics 00:24:42.461 rmmod nvme_keyring 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3667960 ']' 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3667960 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3667960 ']' 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3667960 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3667960 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3667960' 00:24:42.461 killing process with pid 3667960 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3667960 00:24:42.461 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3667960 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.032 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.579 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.579 00:24:45.579 real 0m12.981s 00:24:45.579 user 0m18.750s 00:24:45.579 sys 0m6.356s 00:24:45.579 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.579 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:45.579 ************************************ 00:24:45.579 END TEST nvmf_bdevio_no_huge 00:24:45.579 ************************************ 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.579 ************************************ 00:24:45.579 START TEST nvmf_tls 00:24:45.579 ************************************ 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:45.579 * Looking for test storage... 00:24:45.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.579 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:52.210 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:52.210 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.210 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:52.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:52.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.211 
20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.211 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.471 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.471 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.471 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.471 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.471 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:24:52.471 00:24:52.471 --- 10.0.0.2 ping statistics --- 00:24:52.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.472 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:52.472 00:24:52.472 --- 10.0.0.1 ping statistics --- 00:24:52.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.472 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3672969 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3672969 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3672969 ']' 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.472 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.732 [2024-07-22 20:32:04.503691] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
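The nvmf_tcp_init trace above builds the loopback test bed used by the rest of this run: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; TCP/4420 is opened in iptables and reachability is checked with a ping in each direction. Condensed into a standalone sketch (a reconstruction from the commands logged above, not the verbatim nvmf/common.sh helper):

#!/usr/bin/env bash
# Reconstruction of the netns split performed by nvmf_tcp_init (names and
# addresses are the ones logged above). Requires root.
set -e
TARGET_IF=cvl_0_0          # port handed to the SPDK target
INITIATOR_IF=cvl_0_1       # peer port left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port and verify reachability both ways; every
# later target-side command in the trace is wrapped in "ip netns exec $NS".
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1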
00:24:52.732 [2024-07-22 20:32:04.503785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.732 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.732 [2024-07-22 20:32:04.640714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.993 [2024-07-22 20:32:04.849527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.993 [2024-07-22 20:32:04.849591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.993 [2024-07-22 20:32:04.849606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.993 [2024-07-22 20:32:04.849616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.993 [2024-07-22 20:32:04.849629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.993 [2024-07-22 20:32:04.849678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:53.565 true 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:53.565 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:53.827 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:53.827 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:53.827 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:54.088 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:54.088 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:54.088 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:54.088 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:54.088 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:54.350 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:54.611 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:54.611 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:54.611 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:54.872 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:54.872 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:54.872 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:54.872 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:54.872 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:55.133 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:55.133 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:55.394 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:24:55.394 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:55.394 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:55.394 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.GmaTotIEUl 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RWEKb82mpj 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.GmaTotIEUl 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RWEKb82mpj 00:24:55.395 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:55.656 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:55.917 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.GmaTotIEUl 00:24:55.917 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GmaTotIEUl 00:24:55.917 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:56.178 [2024-07-22 20:32:07.984769] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.178 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:56.178 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:56.438 [2024-07-22 20:32:08.313608] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.438 [2024-07-22 20:32:08.313837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.438 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:56.699 malloc0 00:24:56.699 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:56.699 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GmaTotIEUl 00:24:56.960 [2024-07-22 20:32:08.798217] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:56.960 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GmaTotIEUl 00:24:56.960 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.961 Initializing NVMe Controllers 00:25:06.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.961 Initialization complete. Launching workers. 00:25:06.961 ======================================================== 00:25:06.961 Latency(us) 00:25:06.961 Device Information : IOPS MiB/s Average min max 00:25:06.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15516.76 60.61 4124.76 1517.81 6685.08 00:25:06.961 ======================================================== 00:25:06.961 Total : 15516.76 60.61 4124.76 1517.81 6685.08 00:25:06.961 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GmaTotIEUl' 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3675708 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3675708 /var/tmp/bdevperf.sock 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3675708 ']' 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.222 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.222 [2024-07-22 20:32:19.114986] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:07.222 [2024-07-22 20:32:19.115098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675708 ] 00:25:07.222 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.222 [2024-07-22 20:32:19.210149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.483 [2024-07-22 20:32:19.344365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.058 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.058 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:08.058 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GmaTotIEUl 00:25:08.058 [2024-07-22 20:32:19.983255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.058 [2024-07-22 20:32:19.983352] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:08.058 TLSTESTn1 00:25:08.318 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:08.318 Running I/O for 10 seconds... 
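Before this bdevperf run, target/tls.sh@118-128 built the two PSKs it hands around: format_interchange_psk turns a raw hex secret plus a hash identifier into the NVMeTLSkey-1:01:...: strings seen above, which are written to mktemp files and chmod'd 0600 before being passed to nvmf_subsystem_add_host --psk on the target side and to --psk-path / --psk on the initiator side. A minimal sketch of that construction, under the assumption that the interchange format appends a little-endian CRC-32 of the configured secret and base64-encodes the result (the real helper is format_key in nvmf/common.sh, whose body is not shown in this trace):

# Hypothetical stand-in for: format_interchange_psk <hex-secret> <hash-id>
secret=00112233445566778899aabbccddeeff   # 32 hex characters, as logged above
hash_id=1                                 # 01 tag, presumably HMAC-SHA-256 per the interchange format
key=$(python3 - "$secret" "$hash_id" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
hash_id = int(sys.argv[2])
# Assumption: CRC-32 of the secret, appended little-endian, then base64-encoded.
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(secret + crc).decode()}:")
PY
)

key_path=$(mktemp)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"    # mirrors the chmod 0600 applied to /tmp/tmp.GmaTotIEUl above

The resulting path is then registered on the target (nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk) and reused by spdk_nvme_perf (--psk-path) and by the bdevperf attach above (--psk).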
00:25:18.316 00:25:18.316 Latency(us) 00:25:18.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.316 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:18.316 Verification LBA range: start 0x0 length 0x2000 00:25:18.316 TLSTESTn1 : 10.07 3971.39 15.51 0.00 0.00 32118.12 6417.07 95682.56 00:25:18.316 =================================================================================================================== 00:25:18.316 Total : 3971.39 15.51 0.00 0.00 32118.12 6417.07 95682.56 00:25:18.316 0 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3675708 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3675708 ']' 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3675708 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.316 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3675708 00:25:18.576 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:18.576 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:18.576 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3675708' 00:25:18.576 killing process with pid 3675708 00:25:18.576 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3675708 00:25:18.576 Received shutdown signal, test time was about 10.000000 seconds 00:25:18.576 00:25:18.576 Latency(us) 00:25:18.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.577 =================================================================================================================== 00:25:18.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.577 [2024-07-22 20:32:30.345320] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:18.577 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3675708 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RWEKb82mpj 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RWEKb82mpj 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
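The run_bdevperf invocation that just finished (and the NOT-wrapped ones that follow) all use the same remote-RPC pattern: start bdevperf idle with -z on its own JSON-RPC socket, wait for the socket, attach a TLS NVMe-oF controller over it, drive the queued job with bdevperf.py perform_tests, then kill the process. A condensed reconstruction with the paths and arguments used in this job (waitforlisten and killprocess are the autotest_common.sh helpers visible in the trace):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle (-z) with a private RPC socket and a queued verify job.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$RPC_SOCK" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" "$RPC_SOCK"

# 2. Attach the TLS-enabled controller; --psk points at the 0600 key file.
"$SPDK/scripts/rpc.py" -s "$RPC_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GmaTotIEUl

# 3. Run the queued workload against the attached bdev, then tear down.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$RPC_SOCK" perform_tests
killprocess "$bdevperf_pid"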
00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RWEKb82mpj 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RWEKb82mpj' 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3678049 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3678049 /var/tmp/bdevperf.sock 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3678049 ']' 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.858 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.119 [2024-07-22 20:32:30.944516] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
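The valid_exec_arg and type -t checks around this point are the xtrace of the NOT helper from autotest_common.sh, which every negative TLS case below relies on: it runs the wrapped command and succeeds only if that command fails. A condensed reconstruction (signal handling and argument validation are elided; the real helper is the one being traced here):

NOT() {
    local es=0
    "$@" || es=$?
    # An exit status above 128 means the command died from a signal, which is
    # treated as an unexpected crash rather than the failure we were expecting.
    (( es > 128 )) && return "$es"
    (( es != 0 ))    # success only when the wrapped command failed
}

# Usage as in target/tls.sh@146: attaching with the wrong PSK must not succeed.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RWEKb82mpj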
00:25:19.119 [2024-07-22 20:32:30.944628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678049 ] 00:25:19.119 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.119 [2024-07-22 20:32:31.040109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.379 [2024-07-22 20:32:31.173869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.950 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.950 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:19.950 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RWEKb82mpj 00:25:19.950 [2024-07-22 20:32:31.816994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.950 [2024-07-22 20:32:31.817084] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:19.950 [2024-07-22 20:32:31.829303] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:19.950 [2024-07-22 20:32:31.829418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:25:19.950 [2024-07-22 20:32:31.830380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:25:19.950 [2024-07-22 20:32:31.831375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.950 [2024-07-22 20:32:31.831395] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:19.950 [2024-07-22 20:32:31.831404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:19.950 request: 00:25:19.950 { 00:25:19.950 "name": "TLSTEST", 00:25:19.950 "trtype": "tcp", 00:25:19.950 "traddr": "10.0.0.2", 00:25:19.950 "adrfam": "ipv4", 00:25:19.950 "trsvcid": "4420", 00:25:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.951 "prchk_reftag": false, 00:25:19.951 "prchk_guard": false, 00:25:19.951 "hdgst": false, 00:25:19.951 "ddgst": false, 00:25:19.951 "psk": "/tmp/tmp.RWEKb82mpj", 00:25:19.951 "method": "bdev_nvme_attach_controller", 00:25:19.951 "req_id": 1 00:25:19.951 } 00:25:19.951 Got JSON-RPC error response 00:25:19.951 response: 00:25:19.951 { 00:25:19.951 "code": -5, 00:25:19.951 "message": "Input/output error" 00:25:19.951 } 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3678049 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3678049 ']' 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3678049 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678049 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678049' 00:25:19.951 killing process with pid 3678049 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3678049 00:25:19.951 Received shutdown signal, test time was about 10.000000 seconds 00:25:19.951 00:25:19.951 Latency(us) 00:25:19.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.951 =================================================================================================================== 00:25:19.951 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:19.951 [2024-07-22 20:32:31.890243] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:19.951 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3678049 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GmaTotIEUl 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GmaTotIEUl 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:20.552 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GmaTotIEUl 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GmaTotIEUl' 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3678393 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3678393 /var/tmp/bdevperf.sock 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3678393 ']' 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.553 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.553 [2024-07-22 20:32:32.473054] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:20.553 [2024-07-22 20:32:32.473176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678393 ] 00:25:20.553 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.813 [2024-07-22 20:32:32.568100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.813 [2024-07-22 20:32:32.701078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.GmaTotIEUl 00:25:21.386 [2024-07-22 20:32:33.339542] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.386 [2024-07-22 20:32:33.339635] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:21.386 [2024-07-22 20:32:33.352768] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:21.386 [2024-07-22 20:32:33.352798] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:21.386 [2024-07-22 20:32:33.352831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:21.386 [2024-07-22 20:32:33.353049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:25:21.386 [2024-07-22 20:32:33.354018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:25:21.386 [2024-07-22 20:32:33.355019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.386 [2024-07-22 20:32:33.355035] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:21.386 [2024-07-22 20:32:33.355046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:21.386 request: 00:25:21.386 { 00:25:21.386 "name": "TLSTEST", 00:25:21.386 "trtype": "tcp", 00:25:21.386 "traddr": "10.0.0.2", 00:25:21.386 "adrfam": "ipv4", 00:25:21.386 "trsvcid": "4420", 00:25:21.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.386 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:21.386 "prchk_reftag": false, 00:25:21.386 "prchk_guard": false, 00:25:21.386 "hdgst": false, 00:25:21.386 "ddgst": false, 00:25:21.386 "psk": "/tmp/tmp.GmaTotIEUl", 00:25:21.386 "method": "bdev_nvme_attach_controller", 00:25:21.386 "req_id": 1 00:25:21.386 } 00:25:21.386 Got JSON-RPC error response 00:25:21.386 response: 00:25:21.386 { 00:25:21.386 "code": -5, 00:25:21.386 "message": "Input/output error" 00:25:21.386 } 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3678393 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3678393 ']' 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3678393 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.386 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678393 00:25:21.648 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:21.648 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:21.648 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678393' 00:25:21.648 killing process with pid 3678393 00:25:21.648 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3678393 00:25:21.648 Received shutdown signal, test time was about 10.000000 seconds 00:25:21.648 00:25:21.648 Latency(us) 00:25:21.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.648 =================================================================================================================== 00:25:21.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:21.648 [2024-07-22 20:32:33.425231] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:21.648 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3678393 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GmaTotIEUl' 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3678672 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3678672 /var/tmp/bdevperf.sock 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3678672 ']' 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.909 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:22.169 [2024-07-22 20:32:34.009758] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
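The previous negative case failed inside the target with "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", and the cnode2 case starting here fails the same way: the ssl socket layer looks the client's TLS PSK identity up against the host/subsystem pairs registered with nvmf_subsystem_add_host, so any pairing that was never added produces an identity miss rather than a key mismatch. Judging from the logged string, the identity is a fixed NVMe0R01 tag (encoding PSK type and the 01 hash indicator) followed by the host NQN and the subsystem NQN; a tiny illustration with the NQNs from this run (prefix semantics inferred from the log line, not quoted from the spec):

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# Only host1/cnode1 was registered with a PSK earlier, so this lookup fails.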
00:25:22.169 [2024-07-22 20:32:34.009873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678672 ] 00:25:22.169 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.169 [2024-07-22 20:32:34.106732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.430 [2024-07-22 20:32:34.240761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.001 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.001 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:23.001 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GmaTotIEUl 00:25:23.001 [2024-07-22 20:32:34.875641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.001 [2024-07-22 20:32:34.875729] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:23.001 [2024-07-22 20:32:34.886541] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:23.001 [2024-07-22 20:32:34.886567] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:23.001 [2024-07-22 20:32:34.886595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:23.001 [2024-07-22 20:32:34.887078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (107): Transport endpoint is not connected 00:25:23.001 [2024-07-22 20:32:34.888063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:25:23.001 [2024-07-22 20:32:34.889067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:23.001 [2024-07-22 20:32:34.889085] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:23.001 [2024-07-22 20:32:34.889096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:23.001 request: 00:25:23.001 { 00:25:23.001 "name": "TLSTEST", 00:25:23.001 "trtype": "tcp", 00:25:23.001 "traddr": "10.0.0.2", 00:25:23.001 "adrfam": "ipv4", 00:25:23.001 "trsvcid": "4420", 00:25:23.001 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:23.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.001 "prchk_reftag": false, 00:25:23.001 "prchk_guard": false, 00:25:23.001 "hdgst": false, 00:25:23.001 "ddgst": false, 00:25:23.001 "psk": "/tmp/tmp.GmaTotIEUl", 00:25:23.002 "method": "bdev_nvme_attach_controller", 00:25:23.002 "req_id": 1 00:25:23.002 } 00:25:23.002 Got JSON-RPC error response 00:25:23.002 response: 00:25:23.002 { 00:25:23.002 "code": -5, 00:25:23.002 "message": "Input/output error" 00:25:23.002 } 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3678672 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3678672 ']' 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3678672 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678672 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678672' 00:25:23.002 killing process with pid 3678672 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3678672 00:25:23.002 Received shutdown signal, test time was about 10.000000 seconds 00:25:23.002 00:25:23.002 Latency(us) 00:25:23.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.002 =================================================================================================================== 00:25:23.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:23.002 [2024-07-22 20:32:34.976500] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:23.002 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3678672 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3678937 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3678937 /var/tmp/bdevperf.sock 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3678937 ']' 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.573 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.573 [2024-07-22 20:32:35.551047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:23.573 [2024-07-22 20:32:35.551158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678937 ] 00:25:23.834 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.834 [2024-07-22 20:32:35.648277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.834 [2024-07-22 20:32:35.785335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.404 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.404 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:24.404 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:24.664 [2024-07-22 20:32:36.431331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:24.664 [2024-07-22 20:32:36.433079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388180 (9): Bad file descriptor 00:25:24.664 [2024-07-22 20:32:36.434073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:24.664 [2024-07-22 20:32:36.434091] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:24.664 [2024-07-22 20:32:36.434101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
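With the no-PSK attempt above (the JSON-RPC dump that follows belongs to it), the bdevperf phase has now covered one successful attach and four expected failures against the TLS listener. Summarized as a sketch, where each line stands in for a full run_bdevperf invocation (a fresh bdevperf process each time) and NOT is the expected-failure helper shown earlier; paths and NQNs are the ones used in this job:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
attach() {   # <subsystem NQN> <host NQN> [PSK file]
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$1" -q "$2" ${3:+--psk "$3"}
}

attach     nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl  # right pairing, right key: verify I/O runs
NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RWEKb82mpj  # wrong key: handshake fails, RPC returns -5
NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GmaTotIEUl  # host2 never added: PSK identity miss
NOT attach nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GmaTotIEUl  # no pairing registered for cnode2: identity miss
NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1                      # no PSK against the -k (TLS) listener: connection fails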
00:25:24.664 request: 00:25:24.664 { 00:25:24.664 "name": "TLSTEST", 00:25:24.664 "trtype": "tcp", 00:25:24.664 "traddr": "10.0.0.2", 00:25:24.664 "adrfam": "ipv4", 00:25:24.664 "trsvcid": "4420", 00:25:24.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.664 "prchk_reftag": false, 00:25:24.664 "prchk_guard": false, 00:25:24.664 "hdgst": false, 00:25:24.664 "ddgst": false, 00:25:24.664 "method": "bdev_nvme_attach_controller", 00:25:24.664 "req_id": 1 00:25:24.664 } 00:25:24.664 Got JSON-RPC error response 00:25:24.664 response: 00:25:24.664 { 00:25:24.664 "code": -5, 00:25:24.664 "message": "Input/output error" 00:25:24.664 } 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3678937 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3678937 ']' 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3678937 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678937 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678937' 00:25:24.664 killing process with pid 3678937 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3678937 00:25:24.664 Received shutdown signal, test time was about 10.000000 seconds 00:25:24.664 00:25:24.664 Latency(us) 00:25:24.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.664 =================================================================================================================== 00:25:24.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:24.664 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3678937 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3672969 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3672969 ']' 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3672969 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3672969 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3672969' 00:25:25.235 killing process with pid 3672969 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3672969 00:25:25.235 [2024-07-22 20:32:37.069212] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:25.235 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3672969 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:25:25.806 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.5ZvUEt5CLi 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.5ZvUEt5CLi 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3679444 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3679444 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3679444 ']' 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.067 20:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.067 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 [2024-07-22 20:32:37.944363] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:26.067 [2024-07-22 20:32:37.944464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.067 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.067 [2024-07-22 20:32:38.075881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.328 [2024-07-22 20:32:38.210513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.328 [2024-07-22 20:32:38.210555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.328 [2024-07-22 20:32:38.210564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.328 [2024-07-22 20:32:38.210571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.328 [2024-07-22 20:32:38.210580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
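
[editor note] The format_interchange_psk step above wraps the 48-hex-character configured key into the TLS PSK interchange form NVMeTLSkey-1:02:<base64>: through an inline "python -" helper (nvmf/common.sh format_key), writes it to a mktemp file and locks it down to mode 0600 before handing it to the target and to bdevperf. A hedged sketch of what that helper appears to compute follows; the CRC-32 byte order and the exact string layout are assumptions, so compare the printed value with the key_long shown in the log above to confirm:

# Sketch of the interchange-key formatting seen above. Assumption: the helper
# base64-encodes the configured key followed by its CRC-32 (little-endian) and
# prefixes "NVMeTLSkey-1:<hash-id>:". Verify against key_long in the log.
key=00112233445566778899aabbccddeeff0011223344556677
key_long=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":", end="")
PY
)
key_path=$(mktemp)          # /tmp/tmp.5ZvUEt5CLi in this run
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"      # wider permissions are rejected; see the 0666 test later in the log
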
00:25:26.328 [2024-07-22 20:32:38.210602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5ZvUEt5CLi 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:26.899 [2024-07-22 20:32:38.850622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.899 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:27.160 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:27.160 [2024-07-22 20:32:39.147354] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:27.160 [2024-07-22 20:32:39.147578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.160 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:27.421 malloc0 00:25:27.421 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:27.681 [2024-07-22 20:32:39.633431] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ZvUEt5CLi 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5ZvUEt5CLi' 00:25:27.681 20:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3679802 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3679802 /var/tmp/bdevperf.sock 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3679802 ']' 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.681 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.941 [2024-07-22 20:32:39.723357] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:27.941 [2024-07-22 20:32:39.723471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679802 ] 00:25:27.941 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.941 [2024-07-22 20:32:39.818549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.941 [2024-07-22 20:32:39.952633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.513 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.513 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:28.513 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:28.774 [2024-07-22 20:32:40.582638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.774 [2024-07-22 20:32:40.582730] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:28.774 TLSTESTn1 00:25:28.774 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:28.774 Running I/O for 10 seconds... 
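
[editor note] This is the happy-path pass: setup_nvmf_tgt builds the TLS-enabled target (TCP transport, subsystem, listener with -k, malloc0 namespace, host entry carrying the PSK), then bdevperf attaches with the same --psk and runs a 10-second verify job whose results appear in the table that follows. A condensed sketch of the RPC sequence from the trace above is below; every command appears in the log, and only the $SPDK shorthand and the use of the target's default /var/tmp/spdk.sock are assumptions:

# Target side (nvmf_tgt): transport, subsystem, TLS listener, namespace, PSK host.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=/tmp/tmp.5ZvUEt5CLi

$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Initiator side (bdevperf started as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock
# -q 128 -o 4096 -w verify -t 10): attach over TLS with the same PSK, then run the job.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
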
00:25:41.008 00:25:41.008 Latency(us) 00:25:41.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:41.008 Verification LBA range: start 0x0 length 0x2000 00:25:41.008 TLSTESTn1 : 10.02 5160.04 20.16 0.00 0.00 24762.66 7208.96 37792.43 00:25:41.008 =================================================================================================================== 00:25:41.008 Total : 5160.04 20.16 0.00 0.00 24762.66 7208.96 37792.43 00:25:41.008 0 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3679802 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3679802 ']' 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3679802 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3679802 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3679802' 00:25:41.008 killing process with pid 3679802 00:25:41.008 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3679802 00:25:41.008 Received shutdown signal, test time was about 10.000000 seconds 00:25:41.009 00:25:41.009 Latency(us) 00:25:41.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.009 =================================================================================================================== 00:25:41.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.009 [2024-07-22 20:32:50.903974] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:41.009 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3679802 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.5ZvUEt5CLi 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ZvUEt5CLi 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ZvUEt5CLi 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:41.009 
20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ZvUEt5CLi 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5ZvUEt5CLi' 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3682088 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3682088 /var/tmp/bdevperf.sock 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3682088 ']' 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.009 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.009 [2024-07-22 20:32:51.515064] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:41.009 [2024-07-22 20:32:51.515180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682088 ] 00:25:41.009 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.009 [2024-07-22 20:32:51.611115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.009 [2024-07-22 20:32:51.745608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:41.009 [2024-07-22 20:32:52.404321] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:41.009 [2024-07-22 20:32:52.404371] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:41.009 [2024-07-22 20:32:52.404382] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.5ZvUEt5CLi 00:25:41.009 request: 00:25:41.009 { 00:25:41.009 "name": "TLSTEST", 00:25:41.009 "trtype": "tcp", 00:25:41.009 "traddr": "10.0.0.2", 00:25:41.009 "adrfam": "ipv4", 00:25:41.009 "trsvcid": "4420", 00:25:41.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.009 "prchk_reftag": false, 00:25:41.009 "prchk_guard": false, 00:25:41.009 "hdgst": false, 00:25:41.009 "ddgst": false, 00:25:41.009 "psk": "/tmp/tmp.5ZvUEt5CLi", 00:25:41.009 "method": "bdev_nvme_attach_controller", 00:25:41.009 "req_id": 1 00:25:41.009 } 00:25:41.009 Got JSON-RPC error response 00:25:41.009 response: 00:25:41.009 { 00:25:41.009 "code": -1, 00:25:41.009 "message": "Operation not permitted" 00:25:41.009 } 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3682088 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3682088 ']' 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3682088 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3682088 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3682088' 00:25:41.009 killing process with pid 3682088 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3682088 00:25:41.009 Received shutdown signal, test time was about 10.000000 seconds 00:25:41.009 
00:25:41.009 Latency(us) 00:25:41.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.009 =================================================================================================================== 00:25:41.009 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3682088 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3679444 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3679444 ']' 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3679444 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.009 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3679444 00:25:41.009 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:41.009 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:41.009 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3679444' 00:25:41.009 killing process with pid 3679444 00:25:41.009 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3679444 00:25:41.009 [2024-07-22 20:32:53.025267] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:41.009 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3679444 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3682493 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3682493 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3682493 ']' 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.951 20:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.951 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:41.951 [2024-07-22 20:32:53.815916] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:41.951 [2024-07-22 20:32:53.816031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.952 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.952 [2024-07-22 20:32:53.955360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.212 [2024-07-22 20:32:54.098634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.212 [2024-07-22 20:32:54.098671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.212 [2024-07-22 20:32:54.098681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.212 [2024-07-22 20:32:54.098687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.212 [2024-07-22 20:32:54.098695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
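
[editor note] This stretch of the log exercises the PSK file-permission checks. After the chmod 0666 above, the initiator side already refused to load the key (bdev_nvme: "Incorrect permissions for PSK file", JSON-RPC -1 "Operation not permitted"); a fresh nvmf_tgt is now started so the target side can be shown rejecting the same file in nvmf_subsystem_add_host, which fails with -32603 "Internal error" in the request/response dump below, after which the key is restored to 0600. A small illustration of the rule being tested (the explicit stat check is illustrative only, not part of the harness):

# Anything beyond owner read/write on the PSK file is refused by both sides.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=/tmp/tmp.5ZvUEt5CLi

chmod 0666 "$KEY"
stat -c '%a %n' "$KEY"      # 666: group/other access present, so the key is rejected

# Target side: expected to fail while the key is world-readable.
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY" || echo "add_host rejected 0666 key, as expected"

chmod 0600 "$KEY"           # restored before the final end-to-end pass below
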
00:25:42.212 [2024-07-22 20:32:54.098722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5ZvUEt5CLi 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:42.784 [2024-07-22 20:32:54.734031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.784 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:43.046 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:43.046 [2024-07-22 20:32:55.046822] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:43.046 [2024-07-22 20:32:55.047035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.046 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:43.308 malloc0 00:25:43.308 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:43.569 [2024-07-22 20:32:55.538742] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:43.569 [2024-07-22 20:32:55.538775] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:43.569 [2024-07-22 20:32:55.538795] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:43.569 request: 00:25:43.569 { 00:25:43.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.569 "host": "nqn.2016-06.io.spdk:host1", 00:25:43.569 "psk": "/tmp/tmp.5ZvUEt5CLi", 00:25:43.569 "method": "nvmf_subsystem_add_host", 00:25:43.569 "req_id": 1 00:25:43.569 } 00:25:43.569 Got JSON-RPC error response 00:25:43.569 response: 00:25:43.569 { 00:25:43.569 "code": -32603, 00:25:43.569 "message": "Internal error" 00:25:43.569 } 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:43.569 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3682493 00:25:43.570 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3682493 ']' 00:25:43.570 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3682493 00:25:43.570 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:43.570 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.570 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3682493 00:25:43.831 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:43.831 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:43.831 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3682493' 00:25:43.831 killing process with pid 3682493 00:25:43.831 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3682493 00:25:43.831 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3682493 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.5ZvUEt5CLi 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3682956 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3682956 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3682956 ']' 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.403 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.403 [2024-07-22 20:32:56.370260] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:44.403 [2024-07-22 20:32:56.370373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.664 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.664 [2024-07-22 20:32:56.508922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.664 [2024-07-22 20:32:56.652417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.664 [2024-07-22 20:32:56.652456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.664 [2024-07-22 20:32:56.652465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.664 [2024-07-22 20:32:56.652472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.664 [2024-07-22 20:32:56.652481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
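
[editor note] With the key back at mode 0600, the full TLS setup is repeated against this new target, bdevperf attaches and creates TLSTESTn1, and the resulting configuration is then captured with save_config from both RPC sockets (target/tls.sh@196 and @197), producing the tgtconf and bdevperfconf JSON dumps shown below. A small sketch of capturing those dumps and pulling out the TLS-relevant entries follows; the python3 filtering step is an illustration, not part of the test script:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/scripts/rpc.py save_config > tgtconf.json                                  # target side
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json   # initiator side

python3 - tgtconf.json <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
nvmf = next(s for s in cfg["subsystems"] if s["subsystem"] == "nvmf")
for entry in nvmf["config"]:
    if entry["method"] in ("nvmf_subsystem_add_listener", "nvmf_subsystem_add_host"):
        print(json.dumps(entry, indent=2))   # shows the secure_channel and psk settings
PY
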
00:25:44.664 [2024-07-22 20:32:56.652506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:25:45.235 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5ZvUEt5CLi 00:25:45.236 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:45.496 [2024-07-22 20:32:57.284961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.496 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:45.496 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:45.757 [2024-07-22 20:32:57.597759] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:45.757 [2024-07-22 20:32:57.597975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.757 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:46.017 malloc0 00:25:46.017 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:46.017 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:46.281 [2024-07-22 20:32:58.109336] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3683361 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3683361 /var/tmp/bdevperf.sock 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 3683361 ']' 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.281 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.281 [2024-07-22 20:32:58.184567] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:46.281 [2024-07-22 20:32:58.184692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683361 ] 00:25:46.281 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.281 [2024-07-22 20:32:58.282973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.581 [2024-07-22 20:32:58.419837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.152 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.152 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.152 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:25:47.152 [2024-07-22 20:32:59.091054] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.152 [2024-07-22 20:32:59.091143] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:47.152 TLSTESTn1 00:25:47.412 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:47.673 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:25:47.673 "subsystems": [ 00:25:47.673 { 00:25:47.673 "subsystem": "keyring", 00:25:47.673 "config": [] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "iobuf", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "iobuf_set_options", 00:25:47.673 "params": { 00:25:47.673 "small_pool_count": 8192, 00:25:47.673 "large_pool_count": 1024, 00:25:47.673 "small_bufsize": 8192, 00:25:47.673 "large_bufsize": 135168 00:25:47.673 } 00:25:47.673 } 00:25:47.673 ] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "sock", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "sock_set_default_impl", 00:25:47.673 "params": { 00:25:47.673 "impl_name": "posix" 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "sock_impl_set_options", 00:25:47.673 "params": { 00:25:47.673 "impl_name": "ssl", 00:25:47.673 "recv_buf_size": 4096, 00:25:47.673 "send_buf_size": 4096, 
00:25:47.673 "enable_recv_pipe": true, 00:25:47.673 "enable_quickack": false, 00:25:47.673 "enable_placement_id": 0, 00:25:47.673 "enable_zerocopy_send_server": true, 00:25:47.673 "enable_zerocopy_send_client": false, 00:25:47.673 "zerocopy_threshold": 0, 00:25:47.673 "tls_version": 0, 00:25:47.673 "enable_ktls": false 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "sock_impl_set_options", 00:25:47.673 "params": { 00:25:47.673 "impl_name": "posix", 00:25:47.673 "recv_buf_size": 2097152, 00:25:47.673 "send_buf_size": 2097152, 00:25:47.673 "enable_recv_pipe": true, 00:25:47.673 "enable_quickack": false, 00:25:47.673 "enable_placement_id": 0, 00:25:47.673 "enable_zerocopy_send_server": true, 00:25:47.673 "enable_zerocopy_send_client": false, 00:25:47.673 "zerocopy_threshold": 0, 00:25:47.673 "tls_version": 0, 00:25:47.673 "enable_ktls": false 00:25:47.673 } 00:25:47.673 } 00:25:47.673 ] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "vmd", 00:25:47.673 "config": [] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "accel", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "accel_set_options", 00:25:47.673 "params": { 00:25:47.673 "small_cache_size": 128, 00:25:47.673 "large_cache_size": 16, 00:25:47.673 "task_count": 2048, 00:25:47.673 "sequence_count": 2048, 00:25:47.673 "buf_count": 2048 00:25:47.673 } 00:25:47.673 } 00:25:47.673 ] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "bdev", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "bdev_set_options", 00:25:47.673 "params": { 00:25:47.673 "bdev_io_pool_size": 65535, 00:25:47.673 "bdev_io_cache_size": 256, 00:25:47.673 "bdev_auto_examine": true, 00:25:47.673 "iobuf_small_cache_size": 128, 00:25:47.673 "iobuf_large_cache_size": 16 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_raid_set_options", 00:25:47.673 "params": { 00:25:47.673 "process_window_size_kb": 1024, 00:25:47.673 "process_max_bandwidth_mb_sec": 0 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_iscsi_set_options", 00:25:47.673 "params": { 00:25:47.673 "timeout_sec": 30 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_nvme_set_options", 00:25:47.673 "params": { 00:25:47.673 "action_on_timeout": "none", 00:25:47.673 "timeout_us": 0, 00:25:47.673 "timeout_admin_us": 0, 00:25:47.673 "keep_alive_timeout_ms": 10000, 00:25:47.673 "arbitration_burst": 0, 00:25:47.673 "low_priority_weight": 0, 00:25:47.673 "medium_priority_weight": 0, 00:25:47.673 "high_priority_weight": 0, 00:25:47.673 "nvme_adminq_poll_period_us": 10000, 00:25:47.673 "nvme_ioq_poll_period_us": 0, 00:25:47.673 "io_queue_requests": 0, 00:25:47.673 "delay_cmd_submit": true, 00:25:47.673 "transport_retry_count": 4, 00:25:47.673 "bdev_retry_count": 3, 00:25:47.673 "transport_ack_timeout": 0, 00:25:47.673 "ctrlr_loss_timeout_sec": 0, 00:25:47.673 "reconnect_delay_sec": 0, 00:25:47.673 "fast_io_fail_timeout_sec": 0, 00:25:47.673 "disable_auto_failback": false, 00:25:47.673 "generate_uuids": false, 00:25:47.673 "transport_tos": 0, 00:25:47.673 "nvme_error_stat": false, 00:25:47.673 "rdma_srq_size": 0, 00:25:47.673 "io_path_stat": false, 00:25:47.673 "allow_accel_sequence": false, 00:25:47.673 "rdma_max_cq_size": 0, 00:25:47.673 "rdma_cm_event_timeout_ms": 0, 00:25:47.673 "dhchap_digests": [ 00:25:47.673 "sha256", 00:25:47.673 "sha384", 00:25:47.673 "sha512" 00:25:47.673 ], 00:25:47.673 "dhchap_dhgroups": [ 00:25:47.673 "null", 00:25:47.673 "ffdhe2048", 00:25:47.673 
"ffdhe3072", 00:25:47.673 "ffdhe4096", 00:25:47.673 "ffdhe6144", 00:25:47.673 "ffdhe8192" 00:25:47.673 ] 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_nvme_set_hotplug", 00:25:47.673 "params": { 00:25:47.673 "period_us": 100000, 00:25:47.673 "enable": false 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_malloc_create", 00:25:47.673 "params": { 00:25:47.673 "name": "malloc0", 00:25:47.673 "num_blocks": 8192, 00:25:47.673 "block_size": 4096, 00:25:47.673 "physical_block_size": 4096, 00:25:47.673 "uuid": "b649948f-30ae-428e-a86e-3358e1db7bcb", 00:25:47.673 "optimal_io_boundary": 0, 00:25:47.673 "md_size": 0, 00:25:47.673 "dif_type": 0, 00:25:47.673 "dif_is_head_of_md": false, 00:25:47.673 "dif_pi_format": 0 00:25:47.673 } 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "method": "bdev_wait_for_examine" 00:25:47.673 } 00:25:47.673 ] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "nbd", 00:25:47.673 "config": [] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "scheduler", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "framework_set_scheduler", 00:25:47.673 "params": { 00:25:47.673 "name": "static" 00:25:47.673 } 00:25:47.673 } 00:25:47.673 ] 00:25:47.673 }, 00:25:47.673 { 00:25:47.673 "subsystem": "nvmf", 00:25:47.673 "config": [ 00:25:47.673 { 00:25:47.673 "method": "nvmf_set_config", 00:25:47.673 "params": { 00:25:47.674 "discovery_filter": "match_any", 00:25:47.674 "admin_cmd_passthru": { 00:25:47.674 "identify_ctrlr": false 00:25:47.674 } 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_set_max_subsystems", 00:25:47.674 "params": { 00:25:47.674 "max_subsystems": 1024 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_set_crdt", 00:25:47.674 "params": { 00:25:47.674 "crdt1": 0, 00:25:47.674 "crdt2": 0, 00:25:47.674 "crdt3": 0 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_create_transport", 00:25:47.674 "params": { 00:25:47.674 "trtype": "TCP", 00:25:47.674 "max_queue_depth": 128, 00:25:47.674 "max_io_qpairs_per_ctrlr": 127, 00:25:47.674 "in_capsule_data_size": 4096, 00:25:47.674 "max_io_size": 131072, 00:25:47.674 "io_unit_size": 131072, 00:25:47.674 "max_aq_depth": 128, 00:25:47.674 "num_shared_buffers": 511, 00:25:47.674 "buf_cache_size": 4294967295, 00:25:47.674 "dif_insert_or_strip": false, 00:25:47.674 "zcopy": false, 00:25:47.674 "c2h_success": false, 00:25:47.674 "sock_priority": 0, 00:25:47.674 "abort_timeout_sec": 1, 00:25:47.674 "ack_timeout": 0, 00:25:47.674 "data_wr_pool_size": 0 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_create_subsystem", 00:25:47.674 "params": { 00:25:47.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.674 "allow_any_host": false, 00:25:47.674 "serial_number": "SPDK00000000000001", 00:25:47.674 "model_number": "SPDK bdev Controller", 00:25:47.674 "max_namespaces": 10, 00:25:47.674 "min_cntlid": 1, 00:25:47.674 "max_cntlid": 65519, 00:25:47.674 "ana_reporting": false 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_subsystem_add_host", 00:25:47.674 "params": { 00:25:47.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.674 "host": "nqn.2016-06.io.spdk:host1", 00:25:47.674 "psk": "/tmp/tmp.5ZvUEt5CLi" 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_subsystem_add_ns", 00:25:47.674 "params": { 00:25:47.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.674 "namespace": { 00:25:47.674 "nsid": 1, 00:25:47.674 
"bdev_name": "malloc0", 00:25:47.674 "nguid": "B649948F30AE428EA86E3358E1DB7BCB", 00:25:47.674 "uuid": "b649948f-30ae-428e-a86e-3358e1db7bcb", 00:25:47.674 "no_auto_visible": false 00:25:47.674 } 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "nvmf_subsystem_add_listener", 00:25:47.674 "params": { 00:25:47.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.674 "listen_address": { 00:25:47.674 "trtype": "TCP", 00:25:47.674 "adrfam": "IPv4", 00:25:47.674 "traddr": "10.0.0.2", 00:25:47.674 "trsvcid": "4420" 00:25:47.674 }, 00:25:47.674 "secure_channel": true 00:25:47.674 } 00:25:47.674 } 00:25:47.674 ] 00:25:47.674 } 00:25:47.674 ] 00:25:47.674 }' 00:25:47.674 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:47.674 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:25:47.674 "subsystems": [ 00:25:47.674 { 00:25:47.674 "subsystem": "keyring", 00:25:47.674 "config": [] 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "subsystem": "iobuf", 00:25:47.674 "config": [ 00:25:47.674 { 00:25:47.674 "method": "iobuf_set_options", 00:25:47.674 "params": { 00:25:47.674 "small_pool_count": 8192, 00:25:47.674 "large_pool_count": 1024, 00:25:47.674 "small_bufsize": 8192, 00:25:47.674 "large_bufsize": 135168 00:25:47.674 } 00:25:47.674 } 00:25:47.674 ] 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "subsystem": "sock", 00:25:47.674 "config": [ 00:25:47.674 { 00:25:47.674 "method": "sock_set_default_impl", 00:25:47.674 "params": { 00:25:47.674 "impl_name": "posix" 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "sock_impl_set_options", 00:25:47.674 "params": { 00:25:47.674 "impl_name": "ssl", 00:25:47.674 "recv_buf_size": 4096, 00:25:47.674 "send_buf_size": 4096, 00:25:47.674 "enable_recv_pipe": true, 00:25:47.674 "enable_quickack": false, 00:25:47.674 "enable_placement_id": 0, 00:25:47.674 "enable_zerocopy_send_server": true, 00:25:47.674 "enable_zerocopy_send_client": false, 00:25:47.674 "zerocopy_threshold": 0, 00:25:47.674 "tls_version": 0, 00:25:47.674 "enable_ktls": false 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "sock_impl_set_options", 00:25:47.674 "params": { 00:25:47.674 "impl_name": "posix", 00:25:47.674 "recv_buf_size": 2097152, 00:25:47.674 "send_buf_size": 2097152, 00:25:47.674 "enable_recv_pipe": true, 00:25:47.674 "enable_quickack": false, 00:25:47.674 "enable_placement_id": 0, 00:25:47.674 "enable_zerocopy_send_server": true, 00:25:47.674 "enable_zerocopy_send_client": false, 00:25:47.674 "zerocopy_threshold": 0, 00:25:47.674 "tls_version": 0, 00:25:47.674 "enable_ktls": false 00:25:47.674 } 00:25:47.674 } 00:25:47.674 ] 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "subsystem": "vmd", 00:25:47.674 "config": [] 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "subsystem": "accel", 00:25:47.674 "config": [ 00:25:47.674 { 00:25:47.674 "method": "accel_set_options", 00:25:47.674 "params": { 00:25:47.674 "small_cache_size": 128, 00:25:47.674 "large_cache_size": 16, 00:25:47.674 "task_count": 2048, 00:25:47.674 "sequence_count": 2048, 00:25:47.674 "buf_count": 2048 00:25:47.674 } 00:25:47.674 } 00:25:47.674 ] 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "subsystem": "bdev", 00:25:47.674 "config": [ 00:25:47.674 { 00:25:47.674 "method": "bdev_set_options", 00:25:47.674 "params": { 00:25:47.674 "bdev_io_pool_size": 65535, 00:25:47.674 "bdev_io_cache_size": 256, 00:25:47.674 
"bdev_auto_examine": true, 00:25:47.674 "iobuf_small_cache_size": 128, 00:25:47.674 "iobuf_large_cache_size": 16 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "bdev_raid_set_options", 00:25:47.674 "params": { 00:25:47.674 "process_window_size_kb": 1024, 00:25:47.674 "process_max_bandwidth_mb_sec": 0 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "bdev_iscsi_set_options", 00:25:47.674 "params": { 00:25:47.674 "timeout_sec": 30 00:25:47.674 } 00:25:47.674 }, 00:25:47.674 { 00:25:47.674 "method": "bdev_nvme_set_options", 00:25:47.674 "params": { 00:25:47.674 "action_on_timeout": "none", 00:25:47.674 "timeout_us": 0, 00:25:47.674 "timeout_admin_us": 0, 00:25:47.674 "keep_alive_timeout_ms": 10000, 00:25:47.674 "arbitration_burst": 0, 00:25:47.674 "low_priority_weight": 0, 00:25:47.674 "medium_priority_weight": 0, 00:25:47.674 "high_priority_weight": 0, 00:25:47.674 "nvme_adminq_poll_period_us": 10000, 00:25:47.674 "nvme_ioq_poll_period_us": 0, 00:25:47.674 "io_queue_requests": 512, 00:25:47.674 "delay_cmd_submit": true, 00:25:47.674 "transport_retry_count": 4, 00:25:47.674 "bdev_retry_count": 3, 00:25:47.674 "transport_ack_timeout": 0, 00:25:47.674 "ctrlr_loss_timeout_sec": 0, 00:25:47.674 "reconnect_delay_sec": 0, 00:25:47.674 "fast_io_fail_timeout_sec": 0, 00:25:47.674 "disable_auto_failback": false, 00:25:47.674 "generate_uuids": false, 00:25:47.674 "transport_tos": 0, 00:25:47.674 "nvme_error_stat": false, 00:25:47.674 "rdma_srq_size": 0, 00:25:47.674 "io_path_stat": false, 00:25:47.674 "allow_accel_sequence": false, 00:25:47.674 "rdma_max_cq_size": 0, 00:25:47.674 "rdma_cm_event_timeout_ms": 0, 00:25:47.674 "dhchap_digests": [ 00:25:47.674 "sha256", 00:25:47.674 "sha384", 00:25:47.675 "sha512" 00:25:47.675 ], 00:25:47.675 "dhchap_dhgroups": [ 00:25:47.675 "null", 00:25:47.675 "ffdhe2048", 00:25:47.675 "ffdhe3072", 00:25:47.675 "ffdhe4096", 00:25:47.675 "ffdhe6144", 00:25:47.675 "ffdhe8192" 00:25:47.675 ] 00:25:47.675 } 00:25:47.675 }, 00:25:47.675 { 00:25:47.675 "method": "bdev_nvme_attach_controller", 00:25:47.675 "params": { 00:25:47.675 "name": "TLSTEST", 00:25:47.675 "trtype": "TCP", 00:25:47.675 "adrfam": "IPv4", 00:25:47.675 "traddr": "10.0.0.2", 00:25:47.675 "trsvcid": "4420", 00:25:47.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.675 "prchk_reftag": false, 00:25:47.675 "prchk_guard": false, 00:25:47.675 "ctrlr_loss_timeout_sec": 0, 00:25:47.675 "reconnect_delay_sec": 0, 00:25:47.675 "fast_io_fail_timeout_sec": 0, 00:25:47.675 "psk": "/tmp/tmp.5ZvUEt5CLi", 00:25:47.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.675 "hdgst": false, 00:25:47.675 "ddgst": false 00:25:47.675 } 00:25:47.675 }, 00:25:47.675 { 00:25:47.675 "method": "bdev_nvme_set_hotplug", 00:25:47.675 "params": { 00:25:47.675 "period_us": 100000, 00:25:47.675 "enable": false 00:25:47.675 } 00:25:47.675 }, 00:25:47.675 { 00:25:47.675 "method": "bdev_wait_for_examine" 00:25:47.675 } 00:25:47.675 ] 00:25:47.675 }, 00:25:47.675 { 00:25:47.675 "subsystem": "nbd", 00:25:47.675 "config": [] 00:25:47.675 } 00:25:47.675 ] 00:25:47.675 }' 00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3683361 00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3683361 ']' 00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3683361 00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.675 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3683361 00:25:47.935 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:47.935 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:47.935 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3683361' 00:25:47.935 killing process with pid 3683361 00:25:47.935 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3683361 00:25:47.935 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.935 00:25:47.935 Latency(us) 00:25:47.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.935 =================================================================================================================== 00:25:47.935 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.935 [2024-07-22 20:32:59.726987] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:47.935 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3683361 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3682956 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3682956 ']' 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3682956 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3682956 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3682956' 00:25:48.506 killing process with pid 3682956 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3682956 00:25:48.506 [2024-07-22 20:33:00.287979] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:48.506 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3682956 00:25:49.133 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:49.133 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.133 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.133 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:25:49.133 "subsystems": [ 00:25:49.133 { 
00:25:49.133 "subsystem": "keyring", 00:25:49.133 "config": [] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "iobuf", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "iobuf_set_options", 00:25:49.133 "params": { 00:25:49.133 "small_pool_count": 8192, 00:25:49.133 "large_pool_count": 1024, 00:25:49.133 "small_bufsize": 8192, 00:25:49.133 "large_bufsize": 135168 00:25:49.133 } 00:25:49.133 } 00:25:49.133 ] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "sock", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "sock_set_default_impl", 00:25:49.133 "params": { 00:25:49.133 "impl_name": "posix" 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "sock_impl_set_options", 00:25:49.133 "params": { 00:25:49.133 "impl_name": "ssl", 00:25:49.133 "recv_buf_size": 4096, 00:25:49.133 "send_buf_size": 4096, 00:25:49.133 "enable_recv_pipe": true, 00:25:49.133 "enable_quickack": false, 00:25:49.133 "enable_placement_id": 0, 00:25:49.133 "enable_zerocopy_send_server": true, 00:25:49.133 "enable_zerocopy_send_client": false, 00:25:49.133 "zerocopy_threshold": 0, 00:25:49.133 "tls_version": 0, 00:25:49.133 "enable_ktls": false 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "sock_impl_set_options", 00:25:49.133 "params": { 00:25:49.133 "impl_name": "posix", 00:25:49.133 "recv_buf_size": 2097152, 00:25:49.133 "send_buf_size": 2097152, 00:25:49.133 "enable_recv_pipe": true, 00:25:49.133 "enable_quickack": false, 00:25:49.133 "enable_placement_id": 0, 00:25:49.133 "enable_zerocopy_send_server": true, 00:25:49.133 "enable_zerocopy_send_client": false, 00:25:49.133 "zerocopy_threshold": 0, 00:25:49.133 "tls_version": 0, 00:25:49.133 "enable_ktls": false 00:25:49.133 } 00:25:49.133 } 00:25:49.133 ] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "vmd", 00:25:49.133 "config": [] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "accel", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "accel_set_options", 00:25:49.133 "params": { 00:25:49.133 "small_cache_size": 128, 00:25:49.133 "large_cache_size": 16, 00:25:49.133 "task_count": 2048, 00:25:49.133 "sequence_count": 2048, 00:25:49.133 "buf_count": 2048 00:25:49.133 } 00:25:49.133 } 00:25:49.133 ] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "bdev", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "bdev_set_options", 00:25:49.133 "params": { 00:25:49.133 "bdev_io_pool_size": 65535, 00:25:49.133 "bdev_io_cache_size": 256, 00:25:49.133 "bdev_auto_examine": true, 00:25:49.133 "iobuf_small_cache_size": 128, 00:25:49.133 "iobuf_large_cache_size": 16 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_raid_set_options", 00:25:49.133 "params": { 00:25:49.133 "process_window_size_kb": 1024, 00:25:49.133 "process_max_bandwidth_mb_sec": 0 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_iscsi_set_options", 00:25:49.133 "params": { 00:25:49.133 "timeout_sec": 30 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_nvme_set_options", 00:25:49.133 "params": { 00:25:49.133 "action_on_timeout": "none", 00:25:49.133 "timeout_us": 0, 00:25:49.133 "timeout_admin_us": 0, 00:25:49.133 "keep_alive_timeout_ms": 10000, 00:25:49.133 "arbitration_burst": 0, 00:25:49.133 "low_priority_weight": 0, 00:25:49.133 "medium_priority_weight": 0, 00:25:49.133 "high_priority_weight": 0, 00:25:49.133 "nvme_adminq_poll_period_us": 10000, 00:25:49.133 "nvme_ioq_poll_period_us": 0, 00:25:49.133 
"io_queue_requests": 0, 00:25:49.133 "delay_cmd_submit": true, 00:25:49.133 "transport_retry_count": 4, 00:25:49.133 "bdev_retry_count": 3, 00:25:49.133 "transport_ack_timeout": 0, 00:25:49.133 "ctrlr_loss_timeout_sec": 0, 00:25:49.133 "reconnect_delay_sec": 0, 00:25:49.133 "fast_io_fail_timeout_sec": 0, 00:25:49.133 "disable_auto_failback": false, 00:25:49.133 "generate_uuids": false, 00:25:49.133 "transport_tos": 0, 00:25:49.133 "nvme_error_stat": false, 00:25:49.133 "rdma_srq_size": 0, 00:25:49.133 "io_path_stat": false, 00:25:49.133 "allow_accel_sequence": false, 00:25:49.133 "rdma_max_cq_size": 0, 00:25:49.133 "rdma_cm_event_timeout_ms": 0, 00:25:49.133 "dhchap_digests": [ 00:25:49.133 "sha256", 00:25:49.133 "sha384", 00:25:49.133 "sha512" 00:25:49.133 ], 00:25:49.133 "dhchap_dhgroups": [ 00:25:49.133 "null", 00:25:49.133 "ffdhe2048", 00:25:49.133 "ffdhe3072", 00:25:49.133 "ffdhe4096", 00:25:49.133 "ffdhe6144", 00:25:49.133 "ffdhe8192" 00:25:49.133 ] 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_nvme_set_hotplug", 00:25:49.133 "params": { 00:25:49.133 "period_us": 100000, 00:25:49.133 "enable": false 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_malloc_create", 00:25:49.133 "params": { 00:25:49.133 "name": "malloc0", 00:25:49.133 "num_blocks": 8192, 00:25:49.133 "block_size": 4096, 00:25:49.133 "physical_block_size": 4096, 00:25:49.133 "uuid": "b649948f-30ae-428e-a86e-3358e1db7bcb", 00:25:49.133 "optimal_io_boundary": 0, 00:25:49.133 "md_size": 0, 00:25:49.133 "dif_type": 0, 00:25:49.133 "dif_is_head_of_md": false, 00:25:49.133 "dif_pi_format": 0 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "bdev_wait_for_examine" 00:25:49.133 } 00:25:49.133 ] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "nbd", 00:25:49.133 "config": [] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "scheduler", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "framework_set_scheduler", 00:25:49.133 "params": { 00:25:49.133 "name": "static" 00:25:49.133 } 00:25:49.133 } 00:25:49.133 ] 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "subsystem": "nvmf", 00:25:49.133 "config": [ 00:25:49.133 { 00:25:49.133 "method": "nvmf_set_config", 00:25:49.133 "params": { 00:25:49.133 "discovery_filter": "match_any", 00:25:49.133 "admin_cmd_passthru": { 00:25:49.133 "identify_ctrlr": false 00:25:49.133 } 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "nvmf_set_max_subsystems", 00:25:49.133 "params": { 00:25:49.133 "max_subsystems": 1024 00:25:49.133 } 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "method": "nvmf_set_crdt", 00:25:49.134 "params": { 00:25:49.134 "crdt1": 0, 00:25:49.134 "crdt2": 0, 00:25:49.134 "crdt3": 0 00:25:49.134 } 00:25:49.134 }, 00:25:49.134 { 00:25:49.134 "method": "nvmf_create_transport", 00:25:49.134 "params": { 00:25:49.134 "trtype": "TCP", 00:25:49.134 "max_queue_depth": 128, 00:25:49.134 "max_io_qpairs_per_ctrlr": 127, 00:25:49.134 "in_capsule_data_size": 4096, 00:25:49.134 "max_io_size": 131072, 00:25:49.134 "io_unit_size": 131072, 00:25:49.134 "max_aq_depth": 128, 00:25:49.134 "num_shared_buffers": 511, 00:25:49.134 "buf_cache_size": 4294967295, 00:25:49.134 "dif_insert_or_strip": false, 00:25:49.134 "zcopy": false, 00:25:49.134 "c2h_success": false, 00:25:49.134 "sock_priority": 0, 00:25:49.134 "abort_timeout_sec": 1, 00:25:49.134 "ack_timeout": 0, 00:25:49.134 "data_wr_pool_size": 0 00:25:49.134 } 00:25:49.134 }, 00:25:49.134 { 00:25:49.134 "method": 
"nvmf_create_subsystem", 00:25:49.134 "params": { 00:25:49.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.134 "allow_any_host": false, 00:25:49.134 "serial_number": "SPDK00000000000001", 00:25:49.134 "model_number": "SPDK bdev Controller", 00:25:49.134 "max_namespaces": 10, 00:25:49.134 "min_cntlid": 1, 00:25:49.134 "max_cntlid": 65519, 00:25:49.134 "ana_reporting": false 00:25:49.134 } 00:25:49.134 }, 00:25:49.134 { 00:25:49.134 "method": "nvmf_subsystem_add_host", 00:25:49.134 "params": { 00:25:49.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.134 "host": "nqn.2016-06.io.spdk:host1", 00:25:49.134 "psk": "/tmp/tmp.5ZvUEt5CLi" 00:25:49.134 } 00:25:49.134 }, 00:25:49.134 { 00:25:49.134 "method": "nvmf_subsystem_add_ns", 00:25:49.134 "params": { 00:25:49.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.134 "namespace": { 00:25:49.134 "nsid": 1, 00:25:49.134 "bdev_name": "malloc0", 00:25:49.134 "nguid": "B649948F30AE428EA86E3358E1DB7BCB", 00:25:49.134 "uuid": "b649948f-30ae-428e-a86e-3358e1db7bcb", 00:25:49.134 "no_auto_visible": false 00:25:49.134 } 00:25:49.134 } 00:25:49.134 }, 00:25:49.134 { 00:25:49.134 "method": "nvmf_subsystem_add_listener", 00:25:49.134 "params": { 00:25:49.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.134 "listen_address": { 00:25:49.134 "trtype": "TCP", 00:25:49.134 "adrfam": "IPv4", 00:25:49.134 "traddr": "10.0.0.2", 00:25:49.134 "trsvcid": "4420" 00:25:49.134 }, 00:25:49.134 "secure_channel": true 00:25:49.134 } 00:25:49.134 } 00:25:49.134 ] 00:25:49.134 } 00:25:49.134 ] 00:25:49.135 }' 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3683984 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3683984 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3683984 ']' 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.135 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.135 [2024-07-22 20:33:01.047635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:49.135 [2024-07-22 20:33:01.047747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.135 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.398 [2024-07-22 20:33:01.186121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.398 [2024-07-22 20:33:01.331793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:49.398 [2024-07-22 20:33:01.331828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.398 [2024-07-22 20:33:01.331837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.398 [2024-07-22 20:33:01.331844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.398 [2024-07-22 20:33:01.331852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.398 [2024-07-22 20:33:01.331922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.658 [2024-07-22 20:33:01.662337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.658 [2024-07-22 20:33:01.678316] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:49.920 [2024-07-22 20:33:01.694378] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:49.920 [2024-07-22 20:33:01.694576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3684279 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3684279 /var/tmp/bdevperf.sock 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3684279 ']' 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:25:49.920 "subsystems": [ 00:25:49.920 { 00:25:49.920 "subsystem": "keyring", 00:25:49.920 "config": [] 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "subsystem": "iobuf", 00:25:49.920 "config": [ 00:25:49.920 { 00:25:49.920 "method": "iobuf_set_options", 00:25:49.920 "params": { 00:25:49.920 "small_pool_count": 8192, 00:25:49.920 "large_pool_count": 1024, 00:25:49.920 "small_bufsize": 8192, 00:25:49.920 "large_bufsize": 135168 00:25:49.920 } 00:25:49.920 } 00:25:49.920 ] 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "subsystem": "sock", 00:25:49.920 "config": [ 00:25:49.920 { 00:25:49.920 "method": "sock_set_default_impl", 00:25:49.920 "params": { 00:25:49.920 "impl_name": "posix" 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "method": "sock_impl_set_options", 00:25:49.920 "params": { 00:25:49.920 "impl_name": "ssl", 00:25:49.920 "recv_buf_size": 4096, 00:25:49.920 "send_buf_size": 4096, 00:25:49.920 "enable_recv_pipe": true, 00:25:49.920 "enable_quickack": false, 00:25:49.920 "enable_placement_id": 0, 00:25:49.920 "enable_zerocopy_send_server": true, 00:25:49.920 "enable_zerocopy_send_client": false, 00:25:49.920 "zerocopy_threshold": 0, 00:25:49.920 "tls_version": 0, 00:25:49.920 
"enable_ktls": false 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "method": "sock_impl_set_options", 00:25:49.920 "params": { 00:25:49.920 "impl_name": "posix", 00:25:49.920 "recv_buf_size": 2097152, 00:25:49.920 "send_buf_size": 2097152, 00:25:49.920 "enable_recv_pipe": true, 00:25:49.920 "enable_quickack": false, 00:25:49.920 "enable_placement_id": 0, 00:25:49.920 "enable_zerocopy_send_server": true, 00:25:49.920 "enable_zerocopy_send_client": false, 00:25:49.920 "zerocopy_threshold": 0, 00:25:49.920 "tls_version": 0, 00:25:49.920 "enable_ktls": false 00:25:49.920 } 00:25:49.920 } 00:25:49.920 ] 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "subsystem": "vmd", 00:25:49.920 "config": [] 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "subsystem": "accel", 00:25:49.920 "config": [ 00:25:49.920 { 00:25:49.920 "method": "accel_set_options", 00:25:49.920 "params": { 00:25:49.920 "small_cache_size": 128, 00:25:49.920 "large_cache_size": 16, 00:25:49.920 "task_count": 2048, 00:25:49.920 "sequence_count": 2048, 00:25:49.920 "buf_count": 2048 00:25:49.920 } 00:25:49.920 } 00:25:49.920 ] 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "subsystem": "bdev", 00:25:49.920 "config": [ 00:25:49.920 { 00:25:49.920 "method": "bdev_set_options", 00:25:49.920 "params": { 00:25:49.920 "bdev_io_pool_size": 65535, 00:25:49.920 "bdev_io_cache_size": 256, 00:25:49.920 "bdev_auto_examine": true, 00:25:49.920 "iobuf_small_cache_size": 128, 00:25:49.920 "iobuf_large_cache_size": 16 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "method": "bdev_raid_set_options", 00:25:49.920 "params": { 00:25:49.920 "process_window_size_kb": 1024, 00:25:49.920 "process_max_bandwidth_mb_sec": 0 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "method": "bdev_iscsi_set_options", 00:25:49.920 "params": { 00:25:49.920 "timeout_sec": 30 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 "method": "bdev_nvme_set_options", 00:25:49.920 "params": { 00:25:49.920 "action_on_timeout": "none", 00:25:49.920 "timeout_us": 0, 00:25:49.920 "timeout_admin_us": 0, 00:25:49.920 "keep_alive_timeout_ms": 10000, 00:25:49.920 "arbitration_burst": 0, 00:25:49.920 "low_priority_weight": 0, 00:25:49.920 "medium_priority_weight": 0, 00:25:49.920 "high_priority_weight": 0, 00:25:49.920 "nvme_adminq_poll_period_us": 10000, 00:25:49.920 "nvme_ioq_poll_period_us": 0, 00:25:49.920 "io_queue_requests": 512, 00:25:49.920 "delay_cmd_submit": true, 00:25:49.920 "transport_retry_count": 4, 00:25:49.920 "bdev_retry_count": 3, 00:25:49.920 "transport_ack_timeout": 0, 00:25:49.920 "ctrlr_loss_timeout_sec": 0, 00:25:49.920 "reconnect_delay_sec": 0, 00:25:49.920 "fast_io_fail_timeout_sec": 0, 00:25:49.920 "disable_auto_failback": false, 00:25:49.920 "generate_uuids": false, 00:25:49.920 "transport_tos": 0, 00:25:49.920 "nvme_error_stat": false, 00:25:49.920 "rdma_srq_size": 0, 00:25:49.920 "io_path_stat": false, 00:25:49.920 "allow_accel_sequence": false, 00:25:49.920 "rdma_max_cq_size": 0, 00:25:49.920 "rdma_cm_event_timeout_ms": 0, 00:25:49.920 "dhchap_digests": [ 00:25:49.920 "sha256", 00:25:49.920 "sha384", 00:25:49.920 "sha512" 00:25:49.920 ], 00:25:49.920 "dhchap_dhgroups": [ 00:25:49.920 "null", 00:25:49.920 "ffdhe2048", 00:25:49.920 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.920 "ffdhe3072", 00:25:49.920 "ffdhe4096", 00:25:49.920 "ffdhe6144", 00:25:49.920 "ffdhe8192" 00:25:49.920 ] 00:25:49.920 } 00:25:49.920 }, 00:25:49.920 { 00:25:49.920 
"method": "bdev_nvme_attach_controller", 00:25:49.920 "params": { 00:25:49.920 "name": "TLSTEST", 00:25:49.920 "trtype": "TCP", 00:25:49.920 "adrfam": "IPv4", 00:25:49.920 "traddr": "10.0.0.2", 00:25:49.920 "trsvcid": "4420", 00:25:49.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.920 "prchk_reftag": false, 00:25:49.920 "prchk_guard": false, 00:25:49.920 "ctrlr_loss_timeout_sec": 0, 00:25:49.920 "reconnect_delay_sec": 0, 00:25:49.920 "fast_io_fail_timeout_sec": 0, 00:25:49.920 "psk": "/tmp/tmp.5ZvUEt5CLi", 00:25:49.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.920 "hdgst": false, 00:25:49.920 "ddgst": false 00:25:49.921 } 00:25:49.921 }, 00:25:49.921 { 00:25:49.921 "method": "bdev_nvme_set_hotplug", 00:25:49.921 "params": { 00:25:49.921 "period_us": 100000, 00:25:49.921 "enable": false 00:25:49.921 } 00:25:49.921 }, 00:25:49.921 { 00:25:49.921 "method": "bdev_wait_for_examine" 00:25:49.921 } 00:25:49.921 ] 00:25:49.921 }, 00:25:49.921 { 00:25:49.921 "subsystem": "nbd", 00:25:49.921 "config": [] 00:25:49.921 } 00:25:49.921 ] 00:25:49.921 }' 00:25:49.921 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.921 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:49.921 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.921 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.921 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.921 [2024-07-22 20:33:01.902062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:49.921 [2024-07-22 20:33:01.902175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684279 ] 00:25:50.181 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.181 [2024-07-22 20:33:02.004864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.181 [2024-07-22 20:33:02.138925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.441 [2024-07-22 20:33:02.378324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.441 [2024-07-22 20:33:02.378435] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:50.703 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.703 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:50.703 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:50.703 Running I/O for 10 seconds... 
00:26:02.946 00:26:02.946 Latency(us) 00:26:02.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.946 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.946 Verification LBA range: start 0x0 length 0x2000 00:26:02.946 TLSTESTn1 : 10.03 5131.13 20.04 0.00 0.00 24898.00 6225.92 54831.79 00:26:02.946 =================================================================================================================== 00:26:02.946 Total : 5131.13 20.04 0.00 0.00 24898.00 6225.92 54831.79 00:26:02.946 0 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3684279 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3684279 ']' 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3684279 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3684279 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3684279' 00:26:02.946 killing process with pid 3684279 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3684279 00:26:02.946 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.946 00:26:02.946 Latency(us) 00:26:02.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.946 =================================================================================================================== 00:26:02.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.946 [2024-07-22 20:33:12.846455] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:02.946 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3684279 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3683984 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3683984 ']' 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3683984 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3683984 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:02.946 20:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3683984' 00:26:02.946 killing process with pid 3683984 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3683984 00:26:02.946 [2024-07-22 20:33:13.431360] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:02.946 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3683984 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3687155 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3687155 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3687155 ']' 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.946 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.946 [2024-07-22 20:33:14.217919] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:02.947 [2024-07-22 20:33:14.218020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.947 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.947 [2024-07-22 20:33:14.335491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.947 [2024-07-22 20:33:14.517391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.947 [2024-07-22 20:33:14.517436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.947 [2024-07-22 20:33:14.517449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.947 [2024-07-22 20:33:14.517458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.947 [2024-07-22 20:33:14.517470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
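For reference, the nvmfappstart sequence above reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace used throughout this log, recording its PID, and blocking until the default RPC socket answers before any configuration RPCs are sent. A minimal sketch under stated assumptions: SPDK_ROOT is shorthand introduced here, and the polling loop merely stands in for the real waitforlisten helper in autotest_common.sh, whose body is not shown in this log.

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand, not from the log

# Launch the target in the test namespace with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until the app responds.
for _ in $(seq 1 100); do
    "$SPDK_ROOT/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done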
00:26:02.947 [2024-07-22 20:33:14.517498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.947 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.947 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:02.947 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.947 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.947 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.208 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.208 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.5ZvUEt5CLi 00:26:03.208 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5ZvUEt5CLi 00:26:03.208 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:03.208 [2024-07-22 20:33:15.144311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.208 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:03.471 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:03.471 [2024-07-22 20:33:15.453103] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:03.471 [2024-07-22 20:33:15.453369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.471 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:03.733 malloc0 00:26:03.733 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:03.995 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5ZvUEt5CLi 00:26:03.995 [2024-07-22 20:33:15.985393] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:03.995 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3687545 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.995 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3687545 /var/tmp/bdevperf.sock 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' 
-z 3687545 ']' 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.995 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.256 [2024-07-22 20:33:16.074180] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:04.256 [2024-07-22 20:33:16.074293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687545 ] 00:26:04.256 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.256 [2024-07-22 20:33:16.195506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.517 [2024-07-22 20:33:16.330274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.090 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.090 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:05.090 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ZvUEt5CLi 00:26:05.090 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:05.090 [2024-07-22 20:33:17.070182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:05.352 nvme0n1 00:26:05.352 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:05.352 Running I/O for 1 seconds... 
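Stripped of the xtrace noise, the TLS test above has two halves, both visible in the RPCs in this trace: the target exposes the subsystem on a TLS-capable listener and binds the host NQN to a PSK file, and the initiator side (the bdevperf app) registers the same PSK in its keyring and attaches by key name; the earlier test case instead passed the PSK file path straight into the attach parameters and logged the spdk_nvme_ctrlr_opts.psk deprecation warning. A condensed sketch using only RPCs and arguments that appear in the trace; the rpc_tgt/rpc_perf wrappers and SPDK_ROOT are shorthand introduced here.

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand, not from the log
PSK=/tmp/tmp.5ZvUEt5CLi
rpc_tgt()  { "$SPDK_ROOT/scripts/rpc.py" "$@"; }                            # target app, default /var/tmp/spdk.sock
rpc_perf() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }  # initiator-side bdevperf app

# Target side: TCP transport, subsystem, TLS-capable listener (-k), malloc
# namespace, and a host entry bound to the PSK file.
rpc_tgt nvmf_create_transport -t tcp -o
rpc_tgt nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc_tgt nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc_tgt bdev_malloc_create 32 4096 -b malloc0
rpc_tgt nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc_tgt nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"

# Initiator side: register the PSK under a key name, then attach over TLS by name.
rpc_perf keyring_file_add_key key0 "$PSK"
rpc_perf bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1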
00:26:06.296 00:26:06.296 Latency(us) 00:26:06.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.296 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:06.296 Verification LBA range: start 0x0 length 0x2000 00:26:06.296 nvme0n1 : 1.05 2596.16 10.14 0.00 0.00 48309.72 6280.53 50025.81 00:26:06.296 =================================================================================================================== 00:26:06.296 Total : 2596.16 10.14 0.00 0.00 48309.72 6280.53 50025.81 00:26:06.296 0 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3687545 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3687545 ']' 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3687545 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3687545 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3687545' 00:26:06.557 killing process with pid 3687545 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3687545 00:26:06.557 Received shutdown signal, test time was about 1.000000 seconds 00:26:06.557 00:26:06.557 Latency(us) 00:26:06.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.557 =================================================================================================================== 00:26:06.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.557 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3687545 00:26:07.130 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3687155 00:26:07.130 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3687155 ']' 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3687155 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3687155 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3687155' 00:26:07.131 killing process with pid 3687155 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3687155 00:26:07.131 [2024-07-22 20:33:18.943475] app.c:1024:log_deprecation_hits: 
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:07.131 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3687155 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3688235 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3688235 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3688235 ']' 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.076 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.076 [2024-07-22 20:33:19.986182] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:08.076 [2024-07-22 20:33:19.986293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.076 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.337 [2024-07-22 20:33:20.119087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.337 [2024-07-22 20:33:20.296968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.337 [2024-07-22 20:33:20.297011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.337 [2024-07-22 20:33:20.297026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.337 [2024-07-22 20:33:20.297036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.337 [2024-07-22 20:33:20.297046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
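The repeated teardown steps in this log (most recently for pids 3687545 and 3687155) all follow the same killprocess shape. The helper itself lives in common/autotest_common.sh and its body is not printed here, so the sketch below is a reconstruction from the xtrace alone, with the control flow between the traced commands filled in as an assumption.

# Reconstruction of the killprocess pattern seen in the xtrace; the real helper
# may differ in details (notably its handling of sudo-wrapped processes).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # '[' -z <pid> ']' check from the trace
    kill -0 "$pid" 2>/dev/null || return 0             # nothing to do if the process is already gone
    local process_name=""
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then               # the sudo branch of the real helper is omitted
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true
}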
00:26:08.337 [2024-07-22 20:33:20.297075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.910 [2024-07-22 20:33:20.774250] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.910 malloc0 00:26:08.910 [2024-07-22 20:33:20.834757] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:08.910 [2024-07-22 20:33:20.835004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3688443 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3688443 /var/tmp/bdevperf.sock 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3688443 ']' 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.910 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 [2024-07-22 20:33:20.938410] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
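A few steps further on in this trace, both running applications are asked for their full JSON configuration: the target through rpc_cmd (the suite's wrapper around rpc.py on the default socket) and the bdevperf instance through rpc.py -s /var/tmp/bdevperf.sock; those dumps are the large tgtcfg and bperfcfg blocks that follow. In outline, with rpc.py used directly on both sockets (an assumption for the rpc_cmd side):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand, not from the log

tgtcfg=$("$SPDK_ROOT/scripts/rpc.py" save_config)                              # target, default socket
bperfcfg=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)  # bdevperf instance
# What the script does with the two dumps afterwards is not shown in this excerpt.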
00:26:09.171 [2024-07-22 20:33:20.938513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688443 ] 00:26:09.171 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.171 [2024-07-22 20:33:21.060060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.432 [2024-07-22 20:33:21.195440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.694 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.694 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:09.694 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ZvUEt5CLi 00:26:09.957 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:09.957 [2024-07-22 20:33:21.963754] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:10.221 nvme0n1 00:26:10.221 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.221 Running I/O for 1 seconds... 00:26:11.607 00:26:11.607 Latency(us) 00:26:11.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.607 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:11.607 Verification LBA range: start 0x0 length 0x2000 00:26:11.607 nvme0n1 : 1.06 2440.44 9.53 0.00 0.00 51088.05 6498.99 60730.03 00:26:11.607 =================================================================================================================== 00:26:11.607 Total : 2440.44 9.53 0.00 0.00 51088.05 6498.99 60730.03 00:26:11.607 0 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:26:11.607 "subsystems": [ 00:26:11.607 { 00:26:11.607 "subsystem": "keyring", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "keyring_file_add_key", 00:26:11.607 "params": { 00:26:11.607 "name": "key0", 00:26:11.607 "path": "/tmp/tmp.5ZvUEt5CLi" 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "iobuf", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "iobuf_set_options", 00:26:11.607 "params": { 00:26:11.607 "small_pool_count": 8192, 00:26:11.607 "large_pool_count": 1024, 00:26:11.607 "small_bufsize": 8192, 00:26:11.607 "large_bufsize": 135168 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 
"subsystem": "sock", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "sock_set_default_impl", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "posix" 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "sock_impl_set_options", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "ssl", 00:26:11.607 "recv_buf_size": 4096, 00:26:11.607 "send_buf_size": 4096, 00:26:11.607 "enable_recv_pipe": true, 00:26:11.607 "enable_quickack": false, 00:26:11.607 "enable_placement_id": 0, 00:26:11.607 "enable_zerocopy_send_server": true, 00:26:11.607 "enable_zerocopy_send_client": false, 00:26:11.607 "zerocopy_threshold": 0, 00:26:11.607 "tls_version": 0, 00:26:11.607 "enable_ktls": false 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "sock_impl_set_options", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "posix", 00:26:11.607 "recv_buf_size": 2097152, 00:26:11.607 "send_buf_size": 2097152, 00:26:11.607 "enable_recv_pipe": true, 00:26:11.607 "enable_quickack": false, 00:26:11.607 "enable_placement_id": 0, 00:26:11.607 "enable_zerocopy_send_server": true, 00:26:11.607 "enable_zerocopy_send_client": false, 00:26:11.607 "zerocopy_threshold": 0, 00:26:11.607 "tls_version": 0, 00:26:11.607 "enable_ktls": false 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "vmd", 00:26:11.607 "config": [] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "accel", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "accel_set_options", 00:26:11.607 "params": { 00:26:11.607 "small_cache_size": 128, 00:26:11.607 "large_cache_size": 16, 00:26:11.607 "task_count": 2048, 00:26:11.607 "sequence_count": 2048, 00:26:11.607 "buf_count": 2048 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "bdev", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "bdev_set_options", 00:26:11.607 "params": { 00:26:11.607 "bdev_io_pool_size": 65535, 00:26:11.607 "bdev_io_cache_size": 256, 00:26:11.607 "bdev_auto_examine": true, 00:26:11.607 "iobuf_small_cache_size": 128, 00:26:11.607 "iobuf_large_cache_size": 16 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_raid_set_options", 00:26:11.607 "params": { 00:26:11.607 "process_window_size_kb": 1024, 00:26:11.607 "process_max_bandwidth_mb_sec": 0 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_iscsi_set_options", 00:26:11.607 "params": { 00:26:11.607 "timeout_sec": 30 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_nvme_set_options", 00:26:11.607 "params": { 00:26:11.607 "action_on_timeout": "none", 00:26:11.607 "timeout_us": 0, 00:26:11.607 "timeout_admin_us": 0, 00:26:11.607 "keep_alive_timeout_ms": 10000, 00:26:11.607 "arbitration_burst": 0, 00:26:11.607 "low_priority_weight": 0, 00:26:11.607 "medium_priority_weight": 0, 00:26:11.607 "high_priority_weight": 0, 00:26:11.607 "nvme_adminq_poll_period_us": 10000, 00:26:11.607 "nvme_ioq_poll_period_us": 0, 00:26:11.607 "io_queue_requests": 0, 00:26:11.607 "delay_cmd_submit": true, 00:26:11.607 "transport_retry_count": 4, 00:26:11.607 "bdev_retry_count": 3, 00:26:11.607 "transport_ack_timeout": 0, 00:26:11.607 "ctrlr_loss_timeout_sec": 0, 00:26:11.607 "reconnect_delay_sec": 0, 00:26:11.607 "fast_io_fail_timeout_sec": 0, 00:26:11.607 "disable_auto_failback": false, 00:26:11.607 "generate_uuids": false, 00:26:11.607 "transport_tos": 0, 00:26:11.607 "nvme_error_stat": false, 00:26:11.607 
"rdma_srq_size": 0, 00:26:11.607 "io_path_stat": false, 00:26:11.607 "allow_accel_sequence": false, 00:26:11.607 "rdma_max_cq_size": 0, 00:26:11.607 "rdma_cm_event_timeout_ms": 0, 00:26:11.607 "dhchap_digests": [ 00:26:11.607 "sha256", 00:26:11.607 "sha384", 00:26:11.607 "sha512" 00:26:11.607 ], 00:26:11.607 "dhchap_dhgroups": [ 00:26:11.607 "null", 00:26:11.607 "ffdhe2048", 00:26:11.607 "ffdhe3072", 00:26:11.607 "ffdhe4096", 00:26:11.607 "ffdhe6144", 00:26:11.607 "ffdhe8192" 00:26:11.607 ] 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_nvme_set_hotplug", 00:26:11.607 "params": { 00:26:11.607 "period_us": 100000, 00:26:11.607 "enable": false 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_malloc_create", 00:26:11.607 "params": { 00:26:11.607 "name": "malloc0", 00:26:11.607 "num_blocks": 8192, 00:26:11.607 "block_size": 4096, 00:26:11.607 "physical_block_size": 4096, 00:26:11.607 "uuid": "dc0fd7dc-a80c-46bf-824e-196f2be3d908", 00:26:11.607 "optimal_io_boundary": 0, 00:26:11.607 "md_size": 0, 00:26:11.607 "dif_type": 0, 00:26:11.607 "dif_is_head_of_md": false, 00:26:11.607 "dif_pi_format": 0 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "bdev_wait_for_examine" 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "nbd", 00:26:11.607 "config": [] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "scheduler", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "framework_set_scheduler", 00:26:11.607 "params": { 00:26:11.607 "name": "static" 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "nvmf", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "nvmf_set_config", 00:26:11.607 "params": { 00:26:11.607 "discovery_filter": "match_any", 00:26:11.607 "admin_cmd_passthru": { 00:26:11.607 "identify_ctrlr": false 00:26:11.607 } 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_set_max_subsystems", 00:26:11.607 "params": { 00:26:11.607 "max_subsystems": 1024 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_set_crdt", 00:26:11.607 "params": { 00:26:11.607 "crdt1": 0, 00:26:11.607 "crdt2": 0, 00:26:11.607 "crdt3": 0 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_create_transport", 00:26:11.607 "params": { 00:26:11.607 "trtype": "TCP", 00:26:11.607 "max_queue_depth": 128, 00:26:11.607 "max_io_qpairs_per_ctrlr": 127, 00:26:11.607 "in_capsule_data_size": 4096, 00:26:11.607 "max_io_size": 131072, 00:26:11.607 "io_unit_size": 131072, 00:26:11.607 "max_aq_depth": 128, 00:26:11.607 "num_shared_buffers": 511, 00:26:11.607 "buf_cache_size": 4294967295, 00:26:11.607 "dif_insert_or_strip": false, 00:26:11.607 "zcopy": false, 00:26:11.607 "c2h_success": false, 00:26:11.607 "sock_priority": 0, 00:26:11.607 "abort_timeout_sec": 1, 00:26:11.607 "ack_timeout": 0, 00:26:11.607 "data_wr_pool_size": 0 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_create_subsystem", 00:26:11.607 "params": { 00:26:11.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.607 "allow_any_host": false, 00:26:11.607 "serial_number": "00000000000000000000", 00:26:11.607 "model_number": "SPDK bdev Controller", 00:26:11.607 "max_namespaces": 32, 00:26:11.607 "min_cntlid": 1, 00:26:11.607 "max_cntlid": 65519, 00:26:11.607 "ana_reporting": false 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_subsystem_add_host", 00:26:11.607 
"params": { 00:26:11.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.607 "host": "nqn.2016-06.io.spdk:host1", 00:26:11.607 "psk": "key0" 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_subsystem_add_ns", 00:26:11.607 "params": { 00:26:11.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.607 "namespace": { 00:26:11.607 "nsid": 1, 00:26:11.607 "bdev_name": "malloc0", 00:26:11.607 "nguid": "DC0FD7DCA80C46BF824E196F2BE3D908", 00:26:11.607 "uuid": "dc0fd7dc-a80c-46bf-824e-196f2be3d908", 00:26:11.607 "no_auto_visible": false 00:26:11.607 } 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "nvmf_subsystem_add_listener", 00:26:11.607 "params": { 00:26:11.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.607 "listen_address": { 00:26:11.607 "trtype": "TCP", 00:26:11.607 "adrfam": "IPv4", 00:26:11.607 "traddr": "10.0.0.2", 00:26:11.607 "trsvcid": "4420" 00:26:11.607 }, 00:26:11.607 "secure_channel": false, 00:26:11.607 "sock_impl": "ssl" 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }' 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:11.607 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:26:11.607 "subsystems": [ 00:26:11.607 { 00:26:11.607 "subsystem": "keyring", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "keyring_file_add_key", 00:26:11.607 "params": { 00:26:11.607 "name": "key0", 00:26:11.607 "path": "/tmp/tmp.5ZvUEt5CLi" 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "iobuf", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "iobuf_set_options", 00:26:11.607 "params": { 00:26:11.607 "small_pool_count": 8192, 00:26:11.607 "large_pool_count": 1024, 00:26:11.607 "small_bufsize": 8192, 00:26:11.607 "large_bufsize": 135168 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "sock", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "sock_set_default_impl", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "posix" 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "sock_impl_set_options", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "ssl", 00:26:11.607 "recv_buf_size": 4096, 00:26:11.607 "send_buf_size": 4096, 00:26:11.607 "enable_recv_pipe": true, 00:26:11.607 "enable_quickack": false, 00:26:11.607 "enable_placement_id": 0, 00:26:11.607 "enable_zerocopy_send_server": true, 00:26:11.607 "enable_zerocopy_send_client": false, 00:26:11.607 "zerocopy_threshold": 0, 00:26:11.607 "tls_version": 0, 00:26:11.607 "enable_ktls": false 00:26:11.607 } 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "method": "sock_impl_set_options", 00:26:11.607 "params": { 00:26:11.607 "impl_name": "posix", 00:26:11.607 "recv_buf_size": 2097152, 00:26:11.607 "send_buf_size": 2097152, 00:26:11.607 "enable_recv_pipe": true, 00:26:11.607 "enable_quickack": false, 00:26:11.607 "enable_placement_id": 0, 00:26:11.607 "enable_zerocopy_send_server": true, 00:26:11.607 "enable_zerocopy_send_client": false, 00:26:11.607 "zerocopy_threshold": 0, 00:26:11.607 "tls_version": 0, 00:26:11.607 "enable_ktls": false 00:26:11.607 } 00:26:11.607 } 00:26:11.607 ] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": "vmd", 00:26:11.607 "config": [] 00:26:11.607 }, 00:26:11.607 { 00:26:11.607 "subsystem": 
"accel", 00:26:11.607 "config": [ 00:26:11.607 { 00:26:11.607 "method": "accel_set_options", 00:26:11.607 "params": { 00:26:11.607 "small_cache_size": 128, 00:26:11.607 "large_cache_size": 16, 00:26:11.607 "task_count": 2048, 00:26:11.607 "sequence_count": 2048, 00:26:11.608 "buf_count": 2048 00:26:11.608 } 00:26:11.608 } 00:26:11.608 ] 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "subsystem": "bdev", 00:26:11.608 "config": [ 00:26:11.608 { 00:26:11.608 "method": "bdev_set_options", 00:26:11.608 "params": { 00:26:11.608 "bdev_io_pool_size": 65535, 00:26:11.608 "bdev_io_cache_size": 256, 00:26:11.608 "bdev_auto_examine": true, 00:26:11.608 "iobuf_small_cache_size": 128, 00:26:11.608 "iobuf_large_cache_size": 16 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_raid_set_options", 00:26:11.608 "params": { 00:26:11.608 "process_window_size_kb": 1024, 00:26:11.608 "process_max_bandwidth_mb_sec": 0 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_iscsi_set_options", 00:26:11.608 "params": { 00:26:11.608 "timeout_sec": 30 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_nvme_set_options", 00:26:11.608 "params": { 00:26:11.608 "action_on_timeout": "none", 00:26:11.608 "timeout_us": 0, 00:26:11.608 "timeout_admin_us": 0, 00:26:11.608 "keep_alive_timeout_ms": 10000, 00:26:11.608 "arbitration_burst": 0, 00:26:11.608 "low_priority_weight": 0, 00:26:11.608 "medium_priority_weight": 0, 00:26:11.608 "high_priority_weight": 0, 00:26:11.608 "nvme_adminq_poll_period_us": 10000, 00:26:11.608 "nvme_ioq_poll_period_us": 0, 00:26:11.608 "io_queue_requests": 512, 00:26:11.608 "delay_cmd_submit": true, 00:26:11.608 "transport_retry_count": 4, 00:26:11.608 "bdev_retry_count": 3, 00:26:11.608 "transport_ack_timeout": 0, 00:26:11.608 "ctrlr_loss_timeout_sec": 0, 00:26:11.608 "reconnect_delay_sec": 0, 00:26:11.608 "fast_io_fail_timeout_sec": 0, 00:26:11.608 "disable_auto_failback": false, 00:26:11.608 "generate_uuids": false, 00:26:11.608 "transport_tos": 0, 00:26:11.608 "nvme_error_stat": false, 00:26:11.608 "rdma_srq_size": 0, 00:26:11.608 "io_path_stat": false, 00:26:11.608 "allow_accel_sequence": false, 00:26:11.608 "rdma_max_cq_size": 0, 00:26:11.608 "rdma_cm_event_timeout_ms": 0, 00:26:11.608 "dhchap_digests": [ 00:26:11.608 "sha256", 00:26:11.608 "sha384", 00:26:11.608 "sha512" 00:26:11.608 ], 00:26:11.608 "dhchap_dhgroups": [ 00:26:11.608 "null", 00:26:11.608 "ffdhe2048", 00:26:11.608 "ffdhe3072", 00:26:11.608 "ffdhe4096", 00:26:11.608 "ffdhe6144", 00:26:11.608 "ffdhe8192" 00:26:11.608 ] 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_nvme_attach_controller", 00:26:11.608 "params": { 00:26:11.608 "name": "nvme0", 00:26:11.608 "trtype": "TCP", 00:26:11.608 "adrfam": "IPv4", 00:26:11.608 "traddr": "10.0.0.2", 00:26:11.608 "trsvcid": "4420", 00:26:11.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.608 "prchk_reftag": false, 00:26:11.608 "prchk_guard": false, 00:26:11.608 "ctrlr_loss_timeout_sec": 0, 00:26:11.608 "reconnect_delay_sec": 0, 00:26:11.608 "fast_io_fail_timeout_sec": 0, 00:26:11.608 "psk": "key0", 00:26:11.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.608 "hdgst": false, 00:26:11.608 "ddgst": false 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_nvme_set_hotplug", 00:26:11.608 "params": { 00:26:11.608 "period_us": 100000, 00:26:11.608 "enable": false 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_enable_histogram", 00:26:11.608 
"params": { 00:26:11.608 "name": "nvme0n1", 00:26:11.608 "enable": true 00:26:11.608 } 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "method": "bdev_wait_for_examine" 00:26:11.608 } 00:26:11.608 ] 00:26:11.608 }, 00:26:11.608 { 00:26:11.608 "subsystem": "nbd", 00:26:11.608 "config": [] 00:26:11.608 } 00:26:11.608 ] 00:26:11.608 }' 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3688443 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3688443 ']' 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3688443 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.608 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3688443 00:26:11.869 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:11.869 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:11.869 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3688443' 00:26:11.869 killing process with pid 3688443 00:26:11.869 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3688443 00:26:11.869 Received shutdown signal, test time was about 1.000000 seconds 00:26:11.869 00:26:11.869 Latency(us) 00:26:11.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.869 =================================================================================================================== 00:26:11.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.869 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3688443 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3688235 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3688235 ']' 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3688235 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3688235 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3688235' 00:26:12.445 killing process with pid 3688235 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3688235 00:26:12.445 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3688235 00:26:13.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:26:13.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:26:13.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.428 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:26:13.428 "subsystems": [ 00:26:13.428 { 00:26:13.428 "subsystem": "keyring", 00:26:13.428 "config": [ 00:26:13.428 { 00:26:13.428 "method": "keyring_file_add_key", 00:26:13.428 "params": { 00:26:13.428 "name": "key0", 00:26:13.428 "path": "/tmp/tmp.5ZvUEt5CLi" 00:26:13.428 } 00:26:13.428 } 00:26:13.428 ] 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "subsystem": "iobuf", 00:26:13.428 "config": [ 00:26:13.428 { 00:26:13.428 "method": "iobuf_set_options", 00:26:13.428 "params": { 00:26:13.428 "small_pool_count": 8192, 00:26:13.428 "large_pool_count": 1024, 00:26:13.428 "small_bufsize": 8192, 00:26:13.428 "large_bufsize": 135168 00:26:13.428 } 00:26:13.428 } 00:26:13.428 ] 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "subsystem": "sock", 00:26:13.428 "config": [ 00:26:13.428 { 00:26:13.428 "method": "sock_set_default_impl", 00:26:13.428 "params": { 00:26:13.428 "impl_name": "posix" 00:26:13.428 } 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "method": "sock_impl_set_options", 00:26:13.428 "params": { 00:26:13.428 "impl_name": "ssl", 00:26:13.428 "recv_buf_size": 4096, 00:26:13.428 "send_buf_size": 4096, 00:26:13.428 "enable_recv_pipe": true, 00:26:13.428 "enable_quickack": false, 00:26:13.428 "enable_placement_id": 0, 00:26:13.428 "enable_zerocopy_send_server": true, 00:26:13.428 "enable_zerocopy_send_client": false, 00:26:13.428 "zerocopy_threshold": 0, 00:26:13.428 "tls_version": 0, 00:26:13.428 "enable_ktls": false 00:26:13.428 } 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "method": "sock_impl_set_options", 00:26:13.428 "params": { 00:26:13.428 "impl_name": "posix", 00:26:13.428 "recv_buf_size": 2097152, 00:26:13.428 "send_buf_size": 2097152, 00:26:13.428 "enable_recv_pipe": true, 00:26:13.428 "enable_quickack": false, 00:26:13.428 "enable_placement_id": 0, 00:26:13.428 "enable_zerocopy_send_server": true, 00:26:13.428 "enable_zerocopy_send_client": false, 00:26:13.428 "zerocopy_threshold": 0, 00:26:13.428 "tls_version": 0, 00:26:13.428 "enable_ktls": false 00:26:13.428 } 00:26:13.428 } 00:26:13.428 ] 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "subsystem": "vmd", 00:26:13.428 "config": [] 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "subsystem": "accel", 00:26:13.428 "config": [ 00:26:13.428 { 00:26:13.428 "method": "accel_set_options", 00:26:13.428 "params": { 00:26:13.428 "small_cache_size": 128, 00:26:13.428 "large_cache_size": 16, 00:26:13.428 "task_count": 2048, 00:26:13.428 "sequence_count": 2048, 00:26:13.428 "buf_count": 2048 00:26:13.428 } 00:26:13.428 } 00:26:13.428 ] 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "subsystem": "bdev", 00:26:13.428 "config": [ 00:26:13.428 { 00:26:13.428 "method": "bdev_set_options", 00:26:13.428 "params": { 00:26:13.428 "bdev_io_pool_size": 65535, 00:26:13.428 "bdev_io_cache_size": 256, 00:26:13.428 "bdev_auto_examine": true, 00:26:13.428 "iobuf_small_cache_size": 128, 00:26:13.428 "iobuf_large_cache_size": 16 00:26:13.428 } 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "method": "bdev_raid_set_options", 00:26:13.428 "params": { 00:26:13.428 "process_window_size_kb": 1024, 00:26:13.428 "process_max_bandwidth_mb_sec": 0 00:26:13.428 } 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 "method": "bdev_iscsi_set_options", 00:26:13.428 "params": { 00:26:13.428 "timeout_sec": 30 00:26:13.428 } 00:26:13.428 }, 00:26:13.428 { 00:26:13.428 
"method": "bdev_nvme_set_options", 00:26:13.428 "params": { 00:26:13.428 "action_on_timeout": "none", 00:26:13.428 "timeout_us": 0, 00:26:13.428 "timeout_admin_us": 0, 00:26:13.428 "keep_alive_timeout_ms": 10000, 00:26:13.428 "arbitration_burst": 0, 00:26:13.428 "low_priority_weight": 0, 00:26:13.428 "medium_priority_weight": 0, 00:26:13.428 "high_priority_weight": 0, 00:26:13.428 "nvme_adminq_poll_period_us": 10000, 00:26:13.428 "nvme_ioq_poll_period_us": 0, 00:26:13.428 "io_queue_requests": 0, 00:26:13.428 "delay_cmd_submit": true, 00:26:13.428 "transport_retry_count": 4, 00:26:13.428 "bdev_retry_count": 3, 00:26:13.428 "transport_ack_timeout": 0, 00:26:13.428 "ctrlr_loss_timeout_sec": 0, 00:26:13.428 "reconnect_delay_sec": 0, 00:26:13.428 "fast_io_fail_timeout_sec": 0, 00:26:13.428 "disable_auto_failback": false, 00:26:13.428 "generate_uuids": false, 00:26:13.428 "transport_tos": 0, 00:26:13.428 "nvme_error_stat": false, 00:26:13.429 "rdma_srq_size": 0, 00:26:13.429 "io_path_stat": false, 00:26:13.429 "allow_accel_sequence": false, 00:26:13.429 "rdma_max_cq_size": 0, 00:26:13.429 "rdma_cm_event_timeout_ms": 0, 00:26:13.429 "dhchap_digests": [ 00:26:13.429 "sha256", 00:26:13.429 "sha384", 00:26:13.429 "sha512" 00:26:13.429 ], 00:26:13.429 "dhchap_dhgroups": [ 00:26:13.429 "null", 00:26:13.429 "ffdhe2048", 00:26:13.429 "ffdhe3072", 00:26:13.429 "ffdhe4096", 00:26:13.429 "ffdhe6144", 00:26:13.429 "ffdhe8192" 00:26:13.429 ] 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "bdev_nvme_set_hotplug", 00:26:13.429 "params": { 00:26:13.429 "period_us": 100000, 00:26:13.429 "enable": false 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "bdev_malloc_create", 00:26:13.429 "params": { 00:26:13.429 "name": "malloc0", 00:26:13.429 "num_blocks": 8192, 00:26:13.429 "block_size": 4096, 00:26:13.429 "physical_block_size": 4096, 00:26:13.429 "uuid": "dc0fd7dc-a80c-46bf-824e-196f2be3d908", 00:26:13.429 "optimal_io_boundary": 0, 00:26:13.429 "md_size": 0, 00:26:13.429 "dif_type": 0, 00:26:13.429 "dif_is_head_of_md": false, 00:26:13.429 "dif_pi_format": 0 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "bdev_wait_for_examine" 00:26:13.429 } 00:26:13.429 ] 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "subsystem": "nbd", 00:26:13.429 "config": [] 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "subsystem": "scheduler", 00:26:13.429 "config": [ 00:26:13.429 { 00:26:13.429 "method": "framework_set_scheduler", 00:26:13.429 "params": { 00:26:13.429 "name": "static" 00:26:13.429 } 00:26:13.429 } 00:26:13.429 ] 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "subsystem": "nvmf", 00:26:13.429 "config": [ 00:26:13.429 { 00:26:13.429 "method": "nvmf_set_config", 00:26:13.429 "params": { 00:26:13.429 "discovery_filter": "match_any", 00:26:13.429 "admin_cmd_passthru": { 00:26:13.429 "identify_ctrlr": false 00:26:13.429 } 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_set_max_subsystems", 00:26:13.429 "params": { 00:26:13.429 "max_subsystems": 1024 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_set_crdt", 00:26:13.429 "params": { 00:26:13.429 "crdt1": 0, 00:26:13.429 "crdt2": 0, 00:26:13.429 "crdt3": 0 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_create_transport", 00:26:13.429 "params": { 00:26:13.429 "trtype": "TCP", 00:26:13.429 "max_queue_depth": 128, 00:26:13.429 "max_io_qpairs_per_ctrlr": 127, 00:26:13.429 "in_capsule_data_size": 4096, 00:26:13.429 "max_io_size": 131072, 
00:26:13.429 "io_unit_size": 131072, 00:26:13.429 "max_aq_depth": 128, 00:26:13.429 "num_shared_buffers": 511, 00:26:13.429 "buf_cache_size": 4294967295, 00:26:13.429 "dif_insert_or_strip": false, 00:26:13.429 "zcopy": false, 00:26:13.429 "c2h_success": false, 00:26:13.429 "sock_priority": 0, 00:26:13.429 "abort_timeout_sec": 1, 00:26:13.429 "ack_timeout": 0, 00:26:13.429 "data_wr_pool_size": 0 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_create_subsystem", 00:26:13.429 "params": { 00:26:13.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.429 "allow_any_host": false, 00:26:13.429 "serial_number": "00000000000000000000", 00:26:13.429 "model_number": "SPDK bdev Controller", 00:26:13.429 "max_namespaces": 32, 00:26:13.429 "min_cntlid": 1, 00:26:13.429 "max_cntlid": 65519, 00:26:13.429 "ana_reporting": false 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_subsystem_add_host", 00:26:13.429 "params": { 00:26:13.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.429 "host": "nqn.2016-06.io.spdk:host1", 00:26:13.429 "psk": "key0" 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_subsystem_add_ns", 00:26:13.429 "params": { 00:26:13.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.429 "namespace": { 00:26:13.429 "nsid": 1, 00:26:13.429 "bdev_name": "malloc0", 00:26:13.429 "nguid": "DC0FD7DCA80C46BF824E196F2BE3D908", 00:26:13.429 "uuid": "dc0fd7dc-a80c-46bf-824e-196f2be3d908", 00:26:13.429 "no_auto_visible": false 00:26:13.429 } 00:26:13.429 } 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "method": "nvmf_subsystem_add_listener", 00:26:13.429 "params": { 00:26:13.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.429 "listen_address": { 00:26:13.429 "trtype": "TCP", 00:26:13.429 "adrfam": "IPv4", 00:26:13.429 "traddr": "10.0.0.2", 00:26:13.429 "trsvcid": "4420" 00:26:13.429 }, 00:26:13.429 "secure_channel": false, 00:26:13.429 "sock_impl": "ssl" 00:26:13.429 } 00:26:13.429 } 00:26:13.429 ] 00:26:13.429 } 00:26:13.429 ] 00:26:13.429 }' 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3689280 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3689280 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3689280 ']' 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.429 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.429 [2024-07-22 20:33:25.241933] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:13.429 [2024-07-22 20:33:25.242046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.429 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.429 [2024-07-22 20:33:25.365331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.691 [2024-07-22 20:33:25.542787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.691 [2024-07-22 20:33:25.542830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.691 [2024-07-22 20:33:25.542843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.691 [2024-07-22 20:33:25.542852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.691 [2024-07-22 20:33:25.542864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.691 [2024-07-22 20:33:25.542939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.952 [2024-07-22 20:33:25.947118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.213 [2024-07-22 20:33:25.979132] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:14.213 [2024-07-22 20:33:25.979367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3689479 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3689479 /var/tmp/bdevperf.sock 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3689479 ']' 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.213 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:26:14.213 "subsystems": [ 00:26:14.213 { 00:26:14.213 "subsystem": "keyring", 00:26:14.213 "config": [ 00:26:14.213 { 00:26:14.213 "method": "keyring_file_add_key", 00:26:14.213 "params": { 00:26:14.213 "name": "key0", 00:26:14.213 "path": "/tmp/tmp.5ZvUEt5CLi" 00:26:14.213 } 00:26:14.213 } 00:26:14.213 ] 00:26:14.213 }, 00:26:14.213 { 00:26:14.214 "subsystem": "iobuf", 00:26:14.214 "config": [ 00:26:14.214 { 00:26:14.214 "method": "iobuf_set_options", 00:26:14.214 "params": { 00:26:14.214 "small_pool_count": 8192, 00:26:14.214 "large_pool_count": 1024, 00:26:14.214 "small_bufsize": 8192, 00:26:14.214 "large_bufsize": 135168 00:26:14.214 } 00:26:14.214 } 00:26:14.214 ] 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "subsystem": "sock", 00:26:14.214 "config": [ 00:26:14.214 { 00:26:14.214 "method": "sock_set_default_impl", 00:26:14.214 "params": { 00:26:14.214 "impl_name": "posix" 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "sock_impl_set_options", 00:26:14.214 "params": { 00:26:14.214 "impl_name": "ssl", 00:26:14.214 "recv_buf_size": 4096, 00:26:14.214 "send_buf_size": 4096, 00:26:14.214 "enable_recv_pipe": true, 00:26:14.214 "enable_quickack": false, 00:26:14.214 "enable_placement_id": 0, 00:26:14.214 "enable_zerocopy_send_server": true, 00:26:14.214 "enable_zerocopy_send_client": false, 00:26:14.214 "zerocopy_threshold": 0, 00:26:14.214 "tls_version": 0, 00:26:14.214 "enable_ktls": false 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "sock_impl_set_options", 00:26:14.214 "params": { 00:26:14.214 "impl_name": "posix", 00:26:14.214 "recv_buf_size": 2097152, 00:26:14.214 "send_buf_size": 2097152, 00:26:14.214 "enable_recv_pipe": true, 00:26:14.214 "enable_quickack": false, 00:26:14.214 "enable_placement_id": 0, 00:26:14.214 "enable_zerocopy_send_server": true, 00:26:14.214 "enable_zerocopy_send_client": false, 00:26:14.214 "zerocopy_threshold": 0, 00:26:14.214 "tls_version": 0, 00:26:14.214 "enable_ktls": false 00:26:14.214 } 00:26:14.214 } 00:26:14.214 ] 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "subsystem": "vmd", 00:26:14.214 "config": [] 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "subsystem": "accel", 00:26:14.214 "config": [ 00:26:14.214 { 00:26:14.214 "method": "accel_set_options", 00:26:14.214 "params": { 00:26:14.214 "small_cache_size": 128, 00:26:14.214 "large_cache_size": 16, 00:26:14.214 "task_count": 2048, 00:26:14.214 "sequence_count": 2048, 00:26:14.214 "buf_count": 2048 00:26:14.214 } 00:26:14.214 } 00:26:14.214 ] 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "subsystem": "bdev", 00:26:14.214 "config": [ 00:26:14.214 { 00:26:14.214 "method": "bdev_set_options", 00:26:14.214 "params": { 00:26:14.214 "bdev_io_pool_size": 65535, 00:26:14.214 "bdev_io_cache_size": 256, 00:26:14.214 "bdev_auto_examine": true, 00:26:14.214 "iobuf_small_cache_size": 128, 00:26:14.214 "iobuf_large_cache_size": 16 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_raid_set_options", 00:26:14.214 
"params": { 00:26:14.214 "process_window_size_kb": 1024, 00:26:14.214 "process_max_bandwidth_mb_sec": 0 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_iscsi_set_options", 00:26:14.214 "params": { 00:26:14.214 "timeout_sec": 30 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_nvme_set_options", 00:26:14.214 "params": { 00:26:14.214 "action_on_timeout": "none", 00:26:14.214 "timeout_us": 0, 00:26:14.214 "timeout_admin_us": 0, 00:26:14.214 "keep_alive_timeout_ms": 10000, 00:26:14.214 "arbitration_burst": 0, 00:26:14.214 "low_priority_weight": 0, 00:26:14.214 "medium_priority_weight": 0, 00:26:14.214 "high_priority_weight": 0, 00:26:14.214 "nvme_adminq_poll_period_us": 10000, 00:26:14.214 "nvme_ioq_poll_period_us": 0, 00:26:14.214 "io_queue_requests": 512, 00:26:14.214 "delay_cmd_submit": true, 00:26:14.214 "transport_retry_count": 4, 00:26:14.214 "bdev_retry_count": 3, 00:26:14.214 "transport_ack_timeout": 0, 00:26:14.214 "ctrlr_loss_timeout_sec": 0, 00:26:14.214 "reconnect_delay_sec": 0, 00:26:14.214 "fast_io_fail_timeout_sec": 0, 00:26:14.214 "disable_auto_failback": false, 00:26:14.214 "generate_uuids": false, 00:26:14.214 "transport_tos": 0, 00:26:14.214 "nvme_error_stat": false, 00:26:14.214 "rdma_srq_size": 0, 00:26:14.214 "io_path_stat": false, 00:26:14.214 "allow_accel_sequence": false, 00:26:14.214 "rdma_max_cq_size": 0, 00:26:14.214 "rdma_cm_event_timeout_ms": 0, 00:26:14.214 "dhchap_digests": [ 00:26:14.214 "sha256", 00:26:14.214 "sha384", 00:26:14.214 "sha512" 00:26:14.214 ], 00:26:14.214 "dhchap_dhgroups": [ 00:26:14.214 "null", 00:26:14.214 "ffdhe2048", 00:26:14.214 "ffdhe3072", 00:26:14.214 "ffdhe4096", 00:26:14.214 "ffdhe6144", 00:26:14.214 "ffdhe8192" 00:26:14.214 ] 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_nvme_attach_controller", 00:26:14.214 "params": { 00:26:14.214 "name": "nvme0", 00:26:14.214 "trtype": "TCP", 00:26:14.214 "adrfam": "IPv4", 00:26:14.214 "traddr": "10.0.0.2", 00:26:14.214 "trsvcid": "4420", 00:26:14.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.214 "prchk_reftag": false, 00:26:14.214 "prchk_guard": false, 00:26:14.214 "ctrlr_loss_timeout_sec": 0, 00:26:14.214 "reconnect_delay_sec": 0, 00:26:14.214 "fast_io_fail_timeout_sec": 0, 00:26:14.214 "psk": "key0", 00:26:14.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.214 "hdgst": false, 00:26:14.214 "ddgst": false 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_nvme_set_hotplug", 00:26:14.214 "params": { 00:26:14.214 "period_us": 100000, 00:26:14.214 "enable": false 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_enable_histogram", 00:26:14.214 "params": { 00:26:14.214 "name": "nvme0n1", 00:26:14.214 "enable": true 00:26:14.214 } 00:26:14.214 }, 00:26:14.214 { 00:26:14.214 "method": "bdev_wait_for_examine" 00:26:14.214 } 00:26:14.215 ] 00:26:14.215 }, 00:26:14.215 { 00:26:14.215 "subsystem": "nbd", 00:26:14.215 "config": [] 00:26:14.215 } 00:26:14.215 ] 00:26:14.215 }' 00:26:14.215 [2024-07-22 20:33:26.130358] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:14.215 [2024-07-22 20:33:26.130469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689479 ] 00:26:14.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.475 [2024-07-22 20:33:26.251724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.475 [2024-07-22 20:33:26.387116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.736 [2024-07-22 20:33:26.635731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:14.997 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.997 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:14.997 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.997 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:26:14.997 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.997 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:15.258 Running I/O for 1 seconds... 00:26:16.201 00:26:16.201 Latency(us) 00:26:16.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.201 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:16.201 Verification LBA range: start 0x0 length 0x2000 00:26:16.201 nvme0n1 : 1.02 2863.66 11.19 0.00 0.00 44217.03 6253.23 46967.47 00:26:16.201 =================================================================================================================== 00:26:16.201 Total : 2863.66 11.19 0.00 0.00 44217.03 6253.23 46967.47 00:26:16.201 0 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:16.201 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:16.201 nvmf_trace.0 00:26:16.462 20:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3689479 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3689479 ']' 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3689479 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689479 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689479' 00:26:16.462 killing process with pid 3689479 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3689479 00:26:16.462 Received shutdown signal, test time was about 1.000000 seconds 00:26:16.462 00:26:16.462 Latency(us) 00:26:16.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.462 =================================================================================================================== 00:26:16.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.462 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3689479 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:17.034 rmmod nvme_tcp 00:26:17.034 rmmod nvme_fabrics 00:26:17.034 rmmod nvme_keyring 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3689280 ']' 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3689280 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3689280 ']' 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3689280 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.034 20:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689280 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689280' 00:26:17.034 killing process with pid 3689280 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3689280 00:26:17.034 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3689280 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.976 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GmaTotIEUl /tmp/tmp.RWEKb82mpj /tmp/tmp.5ZvUEt5CLi 00:26:20.520 00:26:20.520 real 1m34.856s 00:26:20.520 user 2m24.977s 00:26:20.520 sys 0m27.485s 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.520 ************************************ 00:26:20.520 END TEST nvmf_tls 00:26:20.520 ************************************ 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:20.520 ************************************ 00:26:20.520 START TEST nvmf_fips 00:26:20.520 ************************************ 00:26:20.520 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:20.520 * Looking for test storage... 
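Before the log moves on to the FIPS test, the TLS-relevant part of the configuration exercised above is easier to see in isolation. The fragment below is only a readability aid extracted from the dumps printed earlier, not a standalone working config (the transport and subsystem creation calls are omitted): it is the trio of calls that registers the PSK file as "key0", ties that key to the host NQN, and opens a listener on the ssl socket implementation.

# TLS subset of the target config shown above, written out for reference only.
cat <<'EOF' > tls_subset.json
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.5ZvUEt5CLi" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false,
                    "sock_impl": "ssl" } } ] }
  ]
}
EOF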
00:26:20.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.520 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:20.521 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:26:20.522 Error setting digest 00:26:20.522 00C2AAA4967F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:20.522 00C2AAA4967F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.522 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:28.675 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.675 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:26:28.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:28.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:28.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.676 
20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:28.676 00:26:28.676 --- 10.0.0.2 ping statistics --- 00:26:28.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.676 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:26:28.676 00:26:28.676 --- 10.0.0.1 ping statistics --- 00:26:28.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.676 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3694332 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3694332 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3694332 ']' 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.676 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 [2024-07-22 20:33:39.697022] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
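The nvmf_tcp_init trace above reduces to a small recipe: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are verified by ping. The lines below are a condensed, hand-written restatement of those traced commands, not the harness code itself; it assumes root and reuses the interface and namespace names from this run.

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0        # target side, will hold 10.0.0.2
  INI_IF=cvl_0_1        # initiator side, stays in the default namespace with 10.0.0.1

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # initiator -> target (0.648 ms above)
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator (0.348 ms above)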
00:26:28.676 [2024-07-22 20:33:39.697147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.676 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.676 [2024-07-22 20:33:39.848401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.676 [2024-07-22 20:33:40.091768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.676 [2024-07-22 20:33:40.091833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.676 [2024-07-22 20:33:40.091849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.676 [2024-07-22 20:33:40.091860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.676 [2024-07-22 20:33:40.091871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.676 [2024-07-22 20:33:40.091918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.676 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:28.677 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:28.677 [2024-07-22 20:33:40.612813] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.677 [2024-07-22 20:33:40.628818] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.677 [2024-07-22 20:33:40.629123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.677 
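The launch that produces the EAL output above is ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, after which waitforlisten blocks until the JSON-RPC socket /var/tmp/spdk.sock answers. A rough stand-in for that launch-and-wait pattern is sketched below; polling rpc_get_methods is a substitution for the harness's waitforlisten internals, and the paths are the ones used in this run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Block until the target's RPC socket responds, bailing out if the launch dies first.
  until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done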
[2024-07-22 20:33:40.686492] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:28.677 malloc0 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3694684 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3694684 /var/tmp/bdevperf.sock 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3694684 ']' 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.938 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:28.938 [2024-07-22 20:33:40.825668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:28.938 [2024-07-22 20:33:40.825802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694684 ] 00:26:28.938 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.938 [2024-07-22 20:33:40.936112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.199 [2024-07-22 20:33:41.076358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.770 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.770 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:29.770 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:29.770 [2024-07-22 20:33:41.691288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.770 [2024-07-22 20:33:41.691389] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:29.770 TLSTESTn1 00:26:30.030 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.030 Running I/O for 10 seconds... 
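The TLS portion above comes down to: write the interchange-format PSK shown in the trace to key.txt with 0600 permissions, register it for host nqn.2016-06.io.spdk:host1 on the target (the tcp.c:3725 warning notes that this PSK-path interface is deprecated), then drive verify I/O from bdevperf over a TLS-wrapped queue pair. The initiator half, condensed from the traced commands, looks roughly like the sketch below; waiting for /var/tmp/bdevperf.sock to appear is elided and assumed to use the same polling pattern as the target launch.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=$SPDK/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
  chmod 0600 "$KEY"
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  # ... wait for /var/tmp/bdevperf.sock to come up ...
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests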
00:26:40.030 00:26:40.030 Latency(us) 00:26:40.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.030 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:40.030 Verification LBA range: start 0x0 length 0x2000 00:26:40.030 TLSTESTn1 : 10.09 2656.66 10.38 0.00 0.00 48028.18 5461.33 93934.93 00:26:40.030 =================================================================================================================== 00:26:40.030 Total : 2656.66 10.38 0.00 0.00 48028.18 5461.33 93934.93 00:26:40.030 0 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:40.030 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:40.030 nvmf_trace.0 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3694684 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3694684 ']' 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3694684 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3694684 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3694684' 00:26:40.290 killing process with pid 3694684 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3694684 00:26:40.290 Received shutdown signal, test time was about 10.000000 seconds 00:26:40.290 00:26:40.290 Latency(us) 00:26:40.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.290 =================================================================================================================== 00:26:40.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.290 
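Two quick consistency checks on the summary row above (10.09 s runtime, 2656.66 IOPS, 10.38 MiB/s, 48028.18 us average latency at queue depth 128 and 4096-byte I/O): throughput is 2656.66 IOPS x 4096 B ≈ 10.88 MB/s ≈ 10.38 MiB/s, matching the MiB/s column, and Little's law gives an expected average latency of 128 / 2656.66 IOPS ≈ 0.0482 s ≈ 48 ms, consistent with the reported average. The second, all-zero table that follows "Received shutdown signal" is the summary bdevperf prints as killprocess shuts it down, not an indication of a failed run.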
[2024-07-22 20:33:52.184418] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:40.290 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3694684 00:26:40.861 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:40.861 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.861 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:26:40.861 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.862 rmmod nvme_tcp 00:26:40.862 rmmod nvme_fabrics 00:26:40.862 rmmod nvme_keyring 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3694332 ']' 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3694332 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3694332 ']' 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3694332 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3694332 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3694332' 00:26:40.862 killing process with pid 3694332 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3694332 00:26:40.862 [2024-07-22 20:33:52.844146] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:40.862 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3694332 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.802 20:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.802 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:43.761 00:26:43.761 real 0m23.611s 00:26:43.761 user 0m24.959s 00:26:43.761 sys 0m9.837s 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:43.761 ************************************ 00:26:43.761 END TEST nvmf_fips 00:26:43.761 ************************************ 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:43.761 ************************************ 00:26:43.761 START TEST nvmf_fuzz 00:26:43.761 ************************************ 00:26:43.761 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:43.761 * Looking for test storage... 
00:26:43.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.022 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
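The nvmf/common.sh trace above pins down the constants the fuzz run inherits before nvmftestinit starts probing hardware. Condensed, they amount to the assignments below; the only generated value is the host NQN from nvme gen-hostnqn, and the hostid derivation shown is an inference from the two values logged, not from the script source.

  NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # this run: nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # inferred: the uuid suffix, matching the value logged
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NET_TYPE=phy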
00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.023 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.623 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:50.624 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:50.624 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.624 20:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:50.624 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:50.624 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.624 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.624 
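The gather_supported_nvmf_pci_devs pass above whitelists known Intel E810/X722 and Mellanox device IDs, then walks each matching PCI function's sysfs node to find the kernel net device behind it (here 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1, both 0x8086:0x159b, bound to ice). A reduced, hand-written form of that lookup, using the two functions this rig reports, is:

  # For each whitelisted PCI function, list the net devices sysfs exposes under it.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $dev ]] || continue
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done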
20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:50.887 00:26:50.887 --- 10.0.0.2 ping statistics --- 00:26:50.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.887 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:26:50.887 00:26:50.887 --- 10.0.0.1 ping statistics --- 00:26:50.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.887 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.887 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3701039 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3701039 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3701039 
']' 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.149 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.091 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.092 Malloc0 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:52.092 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:24.206 Fuzzing completed. Shutting down the fuzz application 00:27:24.206 00:27:24.206 Dumping successful admin opcodes: 00:27:24.206 8, 9, 10, 24, 00:27:24.206 Dumping successful io opcodes: 00:27:24.206 0, 9, 00:27:24.206 NS: 0x200003aefec0 I/O qp, Total commands completed: 818023, total successful commands: 4747, random_seed: 1993532928 00:27:24.206 NS: 0x200003aefec0 admin qp, Total commands completed: 102768, total successful commands: 849, random_seed: 3386207936 00:27:24.206 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:24.467 Fuzzing completed. Shutting down the fuzz application 00:27:24.467 00:27:24.467 Dumping successful admin opcodes: 00:27:24.467 24, 00:27:24.467 Dumping successful io opcodes: 00:27:24.467 00:27:24.467 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2681133537 00:27:24.467 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2681237111 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:24.467 rmmod nvme_tcp 00:27:24.467 rmmod nvme_fabrics 00:27:24.467 rmmod nvme_keyring 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3701039 ']' 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 3701039 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3701039 ']' 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 3701039 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3701039 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3701039' 00:27:24.467 killing process with pid 3701039 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 3701039 00:27:24.467 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 3701039 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.410 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:27.957 00:27:27.957 real 0m43.866s 00:27:27.957 user 0m59.451s 00:27:27.957 sys 0m14.788s 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:27.957 ************************************ 00:27:27.957 END TEST nvmf_fuzz 00:27:27.957 ************************************ 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.957 ************************************ 00:27:27.957 START TEST nvmf_multiconnection 00:27:27.957 ************************************ 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:27.957 * Looking for test storage... 00:27:27.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.957 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.958 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.100 20:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:36.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:36.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:36.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:36.100 Found net devices 
under 0000:4b:00.1: cvl_0_1 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.100 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:27:36.101 00:27:36.101 --- 10.0.0.2 ping statistics --- 00:27:36.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.101 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:27:36.101 00:27:36.101 --- 10.0.0.1 ping statistics --- 00:27:36.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.101 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3711688 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3711688 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 3711688 ']' 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
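For readers reconstructing the setup from this trace: the nvmf_tcp_init steps logged above (here and earlier for the fuzz test) reduce to the shell sequence below. This is a condensed sketch of what nvmf/common.sh is doing, not the script itself; it assumes root privileges and reuses the cvl_0_0/cvl_0_1 port names the harness detected on this machine.

#!/usr/bin/env bash
# Move one port of the NIC into a private namespace so target and initiator
# can talk over real TCP on the same host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side stays in the default namespace; target side lives in the netns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Make sure the host firewall does not drop NVMe/TCP (port 4420) traffic on the link.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions, exactly as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process the "Waiting for process to start up..." message above refers to.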
00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.101 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 [2024-07-22 20:34:47.044965] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:36.101 [2024-07-22 20:34:47.045067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.101 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.101 [2024-07-22 20:34:47.180242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.101 [2024-07-22 20:34:47.366318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.101 [2024-07-22 20:34:47.366360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.101 [2024-07-22 20:34:47.366372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.101 [2024-07-22 20:34:47.366382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.101 [2024-07-22 20:34:47.366392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.101 [2024-07-22 20:34:47.366574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.101 [2024-07-22 20:34:47.366657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.101 [2024-07-22 20:34:47.366796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.101 [2024-07-22 20:34:47.366822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 [2024-07-22 20:34:47.836827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
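The rpc_cmd calls that follow build eleven identical subsystems before any initiator connects. rpc_cmd is effectively the test framework's wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; a standalone equivalent of the loop, with RPC_PY as an assumed path, would look roughly like this:

#!/usr/bin/env bash
RPC_PY=/path/to/spdk/scripts/rpc.py   # assumption: rpc.py from the SPDK checkout
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2

# One TCP transport for the whole target (matches the multiconnection.sh@19 entry above).
$RPC_PY nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MB malloc bdev with 512-byte blocks backing each namespace.
    $RPC_PY bdev_malloc_create 64 512 -b Malloc$i
    $RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $TARGET_IP -s 4420
done

# The initiator side then attaches each subsystem, as logged further down:
#   nvme connect --hostnqn=<generated nqn> --hostid=<generated id> \
#     -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
# and waitforserial greps `lsblk -l -o NAME,SERIAL` for SPDK$i to confirm the device appeared.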
00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 Malloc1 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 [2024-07-22 20:34:47.941533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 Malloc2 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.101 Malloc3 00:27:36.101 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.102 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.362 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 Malloc4 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 Malloc5 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:36.363 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.363 Malloc6 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 Malloc7 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.623 Malloc8 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:36.623 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.624 
20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.624 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 Malloc9 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 Malloc10 00:27:36.884 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 Malloc11 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.884 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:38.825 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:38.825 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:38.825 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.825 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:38.825 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.739 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:42.123 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:42.123 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:42.123 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:42.123 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:42.123 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:44.037 20:34:56 
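The per-subsystem setup logged above repeats the same four RPCs for each index: create a malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener. The following is a minimal standalone sketch of that loop, not the test's actual multiconnection.sh helper; it assumes the nvmf target is already running with a TCP transport created, and that scripts/rpc.py lives under the workspace path used elsewhere in this log.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
NVMF_SUBSYS=11

for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MB malloc bdev with 512-byte blocks, matching "bdev_malloc_create 64 512" above
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    # -a allows any host to connect, -s sets the serial number the host will see (SPDK$i)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    # attach the malloc bdev as a namespace of the subsystem
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    # expose the subsystem on the TCP listener used by the test
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done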
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:44.037 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:45.950 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:45.950 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:45.950 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:45.950 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:45.950 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:47.863 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.864 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:49.777 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:49.777 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:49.777 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:49.777 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:27:49.777 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:51.691 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:53.075 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:53.075 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:53.075 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:53.075 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:53.075 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:54.988 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:54.988 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:54.988 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:55.249 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:55.249 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:55.249 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:55.249 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:55.249 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:57.162 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:57.162 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.162 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:27:57.162 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:57.162 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.076 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:28:00.988 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:28:00.988 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.988 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:00.988 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:00.988 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:02.931 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:28:04.843 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:28:04.843 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
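Each connect in the loop above is followed by waitforserial, which polls lsblk until a block device with the expected SPDK serial appears. A simplified sketch of that connect-and-wait pattern is shown below; the host NQN/UUID and addresses are copied from the log, while the retry bound mirrors the i++ <= 15 check with a 2-second sleep.

HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
NVMF_SUBSYS=11

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID \
                 --hostid=$HOSTID \
                 -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420

    # waitforserial: sleep 2 s between checks, give up after 16 attempts (~30 s)
    tries=0
    while (( tries++ <= 15 )); do
        sleep 2
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )); then
            break
        fi
    done
done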
common/autotest_common.sh@1198 -- # local i=0 00:28:04.844 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:04.844 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:04.844 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:06.754 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:08.666 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:08.666 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:08.666 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:08.667 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:08.667 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:10.580 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:12.491 20:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:12.491 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:12.491 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:12.491 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:12.491 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:14.401 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:16.312 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:16.313 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:16.313 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:16.313 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:16.313 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:18.223 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:18.223 [global] 00:28:18.223 thread=1 00:28:18.223 invalidate=1 00:28:18.223 rw=read 
00:28:18.223 time_based=1 00:28:18.223 runtime=10 00:28:18.223 ioengine=libaio 00:28:18.223 direct=1 00:28:18.223 bs=262144 00:28:18.223 iodepth=64 00:28:18.223 norandommap=1 00:28:18.223 numjobs=1 00:28:18.223 00:28:18.223 [job0] 00:28:18.223 filename=/dev/nvme0n1 00:28:18.223 [job1] 00:28:18.223 filename=/dev/nvme10n1 00:28:18.223 [job2] 00:28:18.223 filename=/dev/nvme1n1 00:28:18.223 [job3] 00:28:18.223 filename=/dev/nvme2n1 00:28:18.223 [job4] 00:28:18.223 filename=/dev/nvme3n1 00:28:18.223 [job5] 00:28:18.223 filename=/dev/nvme4n1 00:28:18.223 [job6] 00:28:18.223 filename=/dev/nvme5n1 00:28:18.223 [job7] 00:28:18.223 filename=/dev/nvme6n1 00:28:18.223 [job8] 00:28:18.223 filename=/dev/nvme7n1 00:28:18.223 [job9] 00:28:18.223 filename=/dev/nvme8n1 00:28:18.223 [job10] 00:28:18.223 filename=/dev/nvme9n1 00:28:18.223 Could not set queue depth (nvme0n1) 00:28:18.223 Could not set queue depth (nvme10n1) 00:28:18.223 Could not set queue depth (nvme1n1) 00:28:18.223 Could not set queue depth (nvme2n1) 00:28:18.223 Could not set queue depth (nvme3n1) 00:28:18.223 Could not set queue depth (nvme4n1) 00:28:18.223 Could not set queue depth (nvme5n1) 00:28:18.223 Could not set queue depth (nvme6n1) 00:28:18.223 Could not set queue depth (nvme7n1) 00:28:18.223 Could not set queue depth (nvme8n1) 00:28:18.223 Could not set queue depth (nvme9n1) 00:28:18.820 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:18.820 fio-3.35 00:28:18.820 Starting 11 threads 00:28:31.048 00:28:31.048 job0: (groupid=0, jobs=1): err= 0: pid=3720428: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=814, BW=204MiB/s (214MB/s)(2048MiB/10051msec) 00:28:31.048 slat (usec): min=6, max=81983, avg=1070.03, stdev=3344.08 00:28:31.048 clat (msec): min=3, max=192, avg=77.37, stdev=29.85 00:28:31.048 lat (msec): min=3, max=192, avg=78.44, stdev=30.31 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 55], 00:28:31.048 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 87], 00:28:31.048 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 124], 00:28:31.048 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 155], 00:28:31.048 | 99.99th=[ 192] 00:28:31.048 bw ( KiB/s): min=135168, 
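The read pass above is launched through scripts/fio-wrapper, which generates a job file equivalent to the [global]/[jobN] sections echoed in the log. A hedged reconstruction of that job file and a direct fio invocation follows; the /tmp path is only illustrative, the parameters are copied verbatim from the [global] section, and the filenames are the /dev/nvmeXn1 nodes created by the connects above. The later randwrite pass uses the same job file with rw=randwrite instead of rw=read.

cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
EOF
# ...one [jobN] stanza per connected namespace (11 in total for this run), then:
fio /tmp/nvmf-read.fio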
max=299008, per=9.72%, avg=208076.80, stdev=46263.89, samples=20 00:28:31.048 iops : min= 528, max= 1168, avg=812.80, stdev=180.72, samples=20 00:28:31.048 lat (msec) : 4=0.10%, 10=0.63%, 20=3.16%, 50=12.32%, 100=58.67% 00:28:31.048 lat (msec) : 250=25.11% 00:28:31.048 cpu : usr=0.29%, sys=2.45%, ctx=1911, majf=0, minf=4097 00:28:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.048 issued rwts: total=8191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.048 job1: (groupid=0, jobs=1): err= 0: pid=3720429: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=615, BW=154MiB/s (161MB/s)(1555MiB/10104msec) 00:28:31.048 slat (usec): min=9, max=79170, avg=1436.27, stdev=3928.44 00:28:31.048 clat (msec): min=15, max=238, avg=102.37, stdev=21.81 00:28:31.048 lat (msec): min=15, max=238, avg=103.80, stdev=22.20 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 44], 5.00th=[ 67], 10.00th=[ 77], 20.00th=[ 87], 00:28:31.048 | 30.00th=[ 94], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 108], 00:28:31.048 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 127], 95.00th=[ 136], 00:28:31.048 | 99.00th=[ 163], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 222], 00:28:31.048 | 99.99th=[ 239] 00:28:31.048 bw ( KiB/s): min=132096, max=199680, per=7.36%, avg=157619.20, stdev=20417.44, samples=20 00:28:31.048 iops : min= 516, max= 780, avg=615.70, stdev=79.76, samples=20 00:28:31.048 lat (msec) : 20=0.06%, 50=2.11%, 100=40.29%, 250=57.54% 00:28:31.048 cpu : usr=0.19%, sys=1.95%, ctx=1481, majf=0, minf=4097 00:28:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.048 issued rwts: total=6220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.048 job2: (groupid=0, jobs=1): err= 0: pid=3720430: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=651, BW=163MiB/s (171MB/s)(1644MiB/10091msec) 00:28:31.048 slat (usec): min=7, max=36106, avg=1470.41, stdev=3678.90 00:28:31.048 clat (msec): min=13, max=210, avg=96.57, stdev=22.54 00:28:31.048 lat (msec): min=14, max=211, avg=98.04, stdev=23.00 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 39], 5.00th=[ 53], 10.00th=[ 70], 20.00th=[ 81], 00:28:31.048 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 103], 00:28:31.048 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 123], 95.00th=[ 129], 00:28:31.048 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 199], 99.95th=[ 199], 00:28:31.048 | 99.99th=[ 211] 00:28:31.048 bw ( KiB/s): min=128000, max=300544, per=7.79%, avg=166758.40, stdev=37536.09, samples=20 00:28:31.048 iops : min= 500, max= 1174, avg=651.40, stdev=146.63, samples=20 00:28:31.048 lat (msec) : 20=0.12%, 50=4.61%, 100=51.38%, 250=43.90% 00:28:31.048 cpu : usr=0.31%, sys=1.84%, ctx=1458, majf=0, minf=3534 00:28:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.048 issued rwts: total=6577,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:28:31.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.048 job3: (groupid=0, jobs=1): err= 0: pid=3720431: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=600, BW=150MiB/s (157MB/s)(1514MiB/10094msec) 00:28:31.048 slat (usec): min=9, max=54049, avg=1646.89, stdev=4044.66 00:28:31.048 clat (msec): min=27, max=216, avg=104.90, stdev=18.51 00:28:31.048 lat (msec): min=28, max=224, avg=106.54, stdev=18.91 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 66], 5.00th=[ 78], 10.00th=[ 82], 20.00th=[ 90], 00:28:31.048 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 110], 00:28:31.048 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 133], 00:28:31.048 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 203], 99.95th=[ 218], 00:28:31.048 | 99.99th=[ 218] 00:28:31.048 bw ( KiB/s): min=123392, max=195584, per=7.17%, avg=153446.40, stdev=19750.97, samples=20 00:28:31.048 iops : min= 482, max= 764, avg=599.40, stdev=77.15, samples=20 00:28:31.048 lat (msec) : 50=0.41%, 100=40.09%, 250=59.50% 00:28:31.048 cpu : usr=0.31%, sys=2.46%, ctx=1375, majf=0, minf=4097 00:28:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.048 issued rwts: total=6057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.048 job4: (groupid=0, jobs=1): err= 0: pid=3720432: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=1005, BW=251MiB/s (264MB/s)(2527MiB/10051msec) 00:28:31.048 slat (usec): min=7, max=33912, avg=981.58, stdev=2434.77 00:28:31.048 clat (msec): min=9, max=144, avg=62.60, stdev=15.63 00:28:31.048 lat (msec): min=9, max=149, avg=63.58, stdev=15.82 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 52], 00:28:31.048 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 00:28:31.048 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 79], 95.00th=[ 90], 00:28:31.048 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 142], 00:28:31.048 | 99.99th=[ 144] 00:28:31.048 bw ( KiB/s): min=168960, max=315904, per=12.01%, avg=257152.00, stdev=44435.58, samples=20 00:28:31.048 iops : min= 660, max= 1234, avg=1004.50, stdev=173.58, samples=20 00:28:31.048 lat (msec) : 10=0.01%, 20=0.39%, 50=13.48%, 100=82.33%, 250=3.79% 00:28:31.048 cpu : usr=0.48%, sys=3.46%, ctx=2085, majf=0, minf=4097 00:28:31.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:31.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.048 issued rwts: total=10108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.048 job5: (groupid=0, jobs=1): err= 0: pid=3720435: Mon Jul 22 20:35:41 2024 00:28:31.048 read: IOPS=946, BW=237MiB/s (248MB/s)(2378MiB/10049msec) 00:28:31.048 slat (usec): min=5, max=70012, avg=820.76, stdev=2969.13 00:28:31.048 clat (msec): min=2, max=176, avg=66.72, stdev=31.48 00:28:31.048 lat (msec): min=2, max=204, avg=67.54, stdev=31.92 00:28:31.048 clat percentiles (msec): 00:28:31.048 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 36], 00:28:31.048 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 73], 00:28:31.048 | 
70.00th=[ 80], 80.00th=[ 95], 90.00th=[ 112], 95.00th=[ 124], 00:28:31.048 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:28:31.048 | 99.99th=[ 178] 00:28:31.049 bw ( KiB/s): min=148480, max=420864, per=11.30%, avg=241868.80, stdev=73617.21, samples=20 00:28:31.049 iops : min= 580, max= 1644, avg=944.80, stdev=287.57, samples=20 00:28:31.049 lat (msec) : 4=0.45%, 10=2.57%, 20=3.43%, 50=21.60%, 100=54.82% 00:28:31.049 lat (msec) : 250=17.14% 00:28:31.049 cpu : usr=0.37%, sys=2.85%, ctx=2219, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:31.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=9511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 job6: (groupid=0, jobs=1): err= 0: pid=3720436: Mon Jul 22 20:35:41 2024 00:28:31.049 read: IOPS=811, BW=203MiB/s (213MB/s)(2031MiB/10016msec) 00:28:31.049 slat (usec): min=7, max=30467, avg=1197.25, stdev=2876.82 00:28:31.049 clat (msec): min=11, max=130, avg=77.64, stdev=18.55 00:28:31.049 lat (msec): min=11, max=152, avg=78.83, stdev=18.86 00:28:31.049 clat percentiles (msec): 00:28:31.049 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 63], 00:28:31.049 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 82], 00:28:31.049 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 109], 00:28:31.049 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 130], 00:28:31.049 | 99.99th=[ 131] 00:28:31.049 bw ( KiB/s): min=152064, max=279040, per=9.64%, avg=206336.00, stdev=39087.47, samples=20 00:28:31.049 iops : min= 594, max= 1090, avg=806.00, stdev=152.69, samples=20 00:28:31.049 lat (msec) : 20=0.28%, 50=5.66%, 100=80.04%, 250=14.01% 00:28:31.049 cpu : usr=0.24%, sys=3.02%, ctx=1802, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:31.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=8123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 job7: (groupid=0, jobs=1): err= 0: pid=3720437: Mon Jul 22 20:35:41 2024 00:28:31.049 read: IOPS=956, BW=239MiB/s (251MB/s)(2403MiB/10048msec) 00:28:31.049 slat (usec): min=8, max=27083, avg=1036.78, stdev=2505.64 00:28:31.049 clat (msec): min=12, max=140, avg=65.77, stdev=18.88 00:28:31.049 lat (msec): min=12, max=140, avg=66.81, stdev=19.13 00:28:31.049 clat percentiles (msec): 00:28:31.049 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 47], 00:28:31.049 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 71], 00:28:31.049 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 99], 00:28:31.049 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 134], 00:28:31.049 | 99.99th=[ 142] 00:28:31.049 bw ( KiB/s): min=170496, max=359424, per=11.42%, avg=244454.40, stdev=58990.49, samples=20 00:28:31.049 iops : min= 666, max= 1404, avg=954.90, stdev=230.43, samples=20 00:28:31.049 lat (msec) : 20=0.16%, 50=29.36%, 100=66.28%, 250=4.20% 00:28:31.049 cpu : usr=0.36%, sys=3.37%, ctx=1983, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:31.049 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=9612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 job8: (groupid=0, jobs=1): err= 0: pid=3720439: Mon Jul 22 20:35:41 2024 00:28:31.049 read: IOPS=764, BW=191MiB/s (200MB/s)(1917MiB/10026msec) 00:28:31.049 slat (usec): min=7, max=30198, avg=1300.78, stdev=3066.45 00:28:31.049 clat (msec): min=20, max=134, avg=82.29, stdev=20.07 00:28:31.049 lat (msec): min=25, max=147, avg=83.59, stdev=20.33 00:28:31.049 clat percentiles (msec): 00:28:31.049 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 66], 00:28:31.049 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:28:31.049 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 113], 00:28:31.049 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 131], 00:28:31.049 | 99.99th=[ 136] 00:28:31.049 bw ( KiB/s): min=148992, max=324608, per=9.09%, avg=194712.05, stdev=41233.17, samples=20 00:28:31.049 iops : min= 582, max= 1268, avg=760.55, stdev=161.02, samples=20 00:28:31.049 lat (msec) : 50=7.08%, 100=72.64%, 250=20.28% 00:28:31.049 cpu : usr=0.35%, sys=2.75%, ctx=1628, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:31.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=7668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 job9: (groupid=0, jobs=1): err= 0: pid=3720440: Mon Jul 22 20:35:41 2024 00:28:31.049 read: IOPS=632, BW=158MiB/s (166MB/s)(1597MiB/10101msec) 00:28:31.049 slat (usec): min=6, max=74565, avg=1335.69, stdev=3906.98 00:28:31.049 clat (msec): min=7, max=206, avg=99.75, stdev=28.49 00:28:31.049 lat (msec): min=7, max=206, avg=101.08, stdev=28.92 00:28:31.049 clat percentiles (msec): 00:28:31.049 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 51], 20.00th=[ 86], 00:28:31.049 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 104], 60.00th=[ 109], 00:28:31.049 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 138], 00:28:31.049 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 197], 99.95th=[ 197], 00:28:31.049 | 99.99th=[ 207] 00:28:31.049 bw ( KiB/s): min=120832, max=254464, per=7.56%, avg=161868.80, stdev=32360.53, samples=20 00:28:31.049 iops : min= 472, max= 994, avg=632.30, stdev=126.41, samples=20 00:28:31.049 lat (msec) : 10=0.05%, 20=1.11%, 50=8.47%, 100=34.85%, 250=55.52% 00:28:31.049 cpu : usr=0.24%, sys=2.01%, ctx=1522, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:31.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=6387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 job10: (groupid=0, jobs=1): err= 0: pid=3720441: Mon Jul 22 20:35:41 2024 00:28:31.049 read: IOPS=600, BW=150MiB/s (157MB/s)(1515MiB/10091msec) 00:28:31.049 slat (usec): min=8, max=44632, avg=1378.81, stdev=3788.06 00:28:31.049 clat (msec): min=11, max=207, avg=105.13, stdev=23.42 00:28:31.049 lat (msec): min=11, max=207, avg=106.51, stdev=23.77 
00:28:31.049 clat percentiles (msec): 00:28:31.049 | 1.00th=[ 34], 5.00th=[ 68], 10.00th=[ 79], 20.00th=[ 89], 00:28:31.049 | 30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 107], 60.00th=[ 113], 00:28:31.049 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 138], 00:28:31.049 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 203], 00:28:31.049 | 99.99th=[ 209] 00:28:31.049 bw ( KiB/s): min=119808, max=201728, per=7.17%, avg=153486.55, stdev=21796.56, samples=20 00:28:31.049 iops : min= 468, max= 788, avg=599.55, stdev=85.15, samples=20 00:28:31.049 lat (msec) : 20=0.23%, 50=2.84%, 100=34.63%, 250=62.30% 00:28:31.049 cpu : usr=0.22%, sys=1.94%, ctx=1560, majf=0, minf=4097 00:28:31.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:31.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.049 issued rwts: total=6058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.049 00:28:31.049 Run status group 0 (all jobs): 00:28:31.049 READ: bw=2091MiB/s (2193MB/s), 150MiB/s-251MiB/s (157MB/s-264MB/s), io=20.6GiB (22.2GB), run=10016-10104msec 00:28:31.049 00:28:31.049 Disk stats (read/write): 00:28:31.049 nvme0n1: ios=15943/0, merge=0/0, ticks=1222545/0, in_queue=1222545, util=96.47% 00:28:31.049 nvme10n1: ios=12179/0, merge=0/0, ticks=1215194/0, in_queue=1215194, util=96.75% 00:28:31.049 nvme1n1: ios=12863/0, merge=0/0, ticks=1216557/0, in_queue=1216557, util=97.12% 00:28:31.049 nvme2n1: ios=11821/0, merge=0/0, ticks=1211770/0, in_queue=1211770, util=97.31% 00:28:31.049 nvme3n1: ios=19775/0, merge=0/0, ticks=1221846/0, in_queue=1221846, util=97.38% 00:28:31.049 nvme4n1: ios=18589/0, merge=0/0, ticks=1226226/0, in_queue=1226226, util=97.89% 00:28:31.049 nvme5n1: ios=15709/0, merge=0/0, ticks=1219532/0, in_queue=1219532, util=98.05% 00:28:31.049 nvme6n1: ios=18829/0, merge=0/0, ticks=1221939/0, in_queue=1221939, util=98.26% 00:28:31.049 nvme7n1: ios=14873/0, merge=0/0, ticks=1219389/0, in_queue=1219389, util=98.80% 00:28:31.049 nvme8n1: ios=12524/0, merge=0/0, ticks=1219477/0, in_queue=1219477, util=99.04% 00:28:31.049 nvme9n1: ios=11804/0, merge=0/0, ticks=1215831/0, in_queue=1215831, util=99.19% 00:28:31.049 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:31.049 [global] 00:28:31.049 thread=1 00:28:31.049 invalidate=1 00:28:31.049 rw=randwrite 00:28:31.049 time_based=1 00:28:31.049 runtime=10 00:28:31.049 ioengine=libaio 00:28:31.049 direct=1 00:28:31.049 bs=262144 00:28:31.049 iodepth=64 00:28:31.049 norandommap=1 00:28:31.049 numjobs=1 00:28:31.049 00:28:31.049 [job0] 00:28:31.049 filename=/dev/nvme0n1 00:28:31.049 [job1] 00:28:31.049 filename=/dev/nvme10n1 00:28:31.049 [job2] 00:28:31.049 filename=/dev/nvme1n1 00:28:31.049 [job3] 00:28:31.049 filename=/dev/nvme2n1 00:28:31.049 [job4] 00:28:31.049 filename=/dev/nvme3n1 00:28:31.049 [job5] 00:28:31.049 filename=/dev/nvme4n1 00:28:31.049 [job6] 00:28:31.049 filename=/dev/nvme5n1 00:28:31.049 [job7] 00:28:31.049 filename=/dev/nvme6n1 00:28:31.049 [job8] 00:28:31.049 filename=/dev/nvme7n1 00:28:31.049 [job9] 00:28:31.049 filename=/dev/nvme8n1 00:28:31.049 [job10] 00:28:31.049 filename=/dev/nvme9n1 00:28:31.049 Could not set queue depth (nvme0n1) 00:28:31.049 Could 
not set queue depth (nvme10n1) 00:28:31.049 Could not set queue depth (nvme1n1) 00:28:31.050 Could not set queue depth (nvme2n1) 00:28:31.050 Could not set queue depth (nvme3n1) 00:28:31.050 Could not set queue depth (nvme4n1) 00:28:31.050 Could not set queue depth (nvme5n1) 00:28:31.050 Could not set queue depth (nvme6n1) 00:28:31.050 Could not set queue depth (nvme7n1) 00:28:31.050 Could not set queue depth (nvme8n1) 00:28:31.050 Could not set queue depth (nvme9n1) 00:28:31.050 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:31.050 fio-3.35 00:28:31.050 Starting 11 threads 00:28:41.086 00:28:41.086 job0: (groupid=0, jobs=1): err= 0: pid=3722325: Mon Jul 22 20:35:52 2024 00:28:41.086 write: IOPS=690, BW=173MiB/s (181MB/s)(1745MiB/10109msec); 0 zone resets 00:28:41.086 slat (usec): min=22, max=58886, avg=1281.85, stdev=2881.69 00:28:41.086 clat (msec): min=2, max=235, avg=91.39, stdev=43.70 00:28:41.086 lat (msec): min=2, max=235, avg=92.67, stdev=44.33 00:28:41.086 clat percentiles (msec): 00:28:41.086 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 59], 00:28:41.086 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 78], 60.00th=[ 100], 00:28:41.086 | 70.00th=[ 117], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 165], 00:28:41.086 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 222], 99.95th=[ 228], 00:28:41.086 | 99.99th=[ 236] 00:28:41.086 bw ( KiB/s): min=92160, max=285184, per=11.33%, avg=177049.60, stdev=66923.34, samples=20 00:28:41.086 iops : min= 360, max= 1114, avg=691.60, stdev=261.42, samples=20 00:28:41.086 lat (msec) : 4=0.16%, 10=0.66%, 20=1.69%, 50=8.07%, 100=52.33% 00:28:41.086 lat (msec) : 250=37.10% 00:28:41.086 cpu : usr=1.65%, sys=2.02%, ctx=2682, majf=0, minf=1 00:28:41.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:41.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.086 issued rwts: total=0,6979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.086 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.086 job1: (groupid=0, jobs=1): err= 0: pid=3722354: Mon Jul 22 20:35:52 2024 
00:28:41.086 write: IOPS=666, BW=167MiB/s (175MB/s)(1684MiB/10110msec); 0 zone resets 00:28:41.086 slat (usec): min=25, max=35965, avg=1337.83, stdev=2748.34 00:28:41.086 clat (msec): min=2, max=233, avg=94.70, stdev=36.52 00:28:41.086 lat (msec): min=2, max=234, avg=96.04, stdev=37.02 00:28:41.086 clat percentiles (msec): 00:28:41.086 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 61], 20.00th=[ 64], 00:28:41.086 | 30.00th=[ 67], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 100], 00:28:41.086 | 70.00th=[ 114], 80.00th=[ 122], 90.00th=[ 142], 95.00th=[ 163], 00:28:41.086 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 220], 99.95th=[ 226], 00:28:41.086 | 99.99th=[ 234] 00:28:41.086 bw ( KiB/s): min=88064, max=253952, per=10.93%, avg=170782.55, stdev=48873.86, samples=20 00:28:41.086 iops : min= 344, max= 992, avg=667.10, stdev=190.90, samples=20 00:28:41.086 lat (msec) : 4=0.10%, 10=0.65%, 20=0.95%, 50=5.84%, 100=54.22% 00:28:41.086 lat (msec) : 250=38.23% 00:28:41.086 cpu : usr=1.38%, sys=2.15%, ctx=2367, majf=0, minf=1 00:28:41.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:41.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.086 issued rwts: total=0,6735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.086 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.086 job2: (groupid=0, jobs=1): err= 0: pid=3722372: Mon Jul 22 20:35:52 2024 00:28:41.086 write: IOPS=527, BW=132MiB/s (138MB/s)(1333MiB/10114msec); 0 zone resets 00:28:41.086 slat (usec): min=23, max=44849, avg=1780.66, stdev=3574.72 00:28:41.086 clat (msec): min=4, max=230, avg=119.60, stdev=43.55 00:28:41.086 lat (msec): min=6, max=230, avg=121.38, stdev=44.09 00:28:41.086 clat percentiles (msec): 00:28:41.086 | 1.00th=[ 24], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 71], 00:28:41.086 | 30.00th=[ 77], 40.00th=[ 116], 50.00th=[ 126], 60.00th=[ 146], 00:28:41.086 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 180], 00:28:41.086 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 224], 99.95th=[ 224], 00:28:41.086 | 99.99th=[ 230] 00:28:41.086 bw ( KiB/s): min=90112, max=254976, per=8.63%, avg=134860.80, stdev=47615.39, samples=20 00:28:41.086 iops : min= 352, max= 996, avg=526.80, stdev=186.00, samples=20 00:28:41.086 lat (msec) : 10=0.09%, 20=0.58%, 50=2.12%, 100=32.40%, 250=64.81% 00:28:41.086 cpu : usr=1.29%, sys=1.61%, ctx=1687, majf=0, minf=1 00:28:41.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:41.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.086 issued rwts: total=0,5331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.086 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.086 job3: (groupid=0, jobs=1): err= 0: pid=3722385: Mon Jul 22 20:35:52 2024 00:28:41.086 write: IOPS=423, BW=106MiB/s (111MB/s)(1069MiB/10100msec); 0 zone resets 00:28:41.086 slat (usec): min=25, max=32945, avg=2198.18, stdev=4069.24 00:28:41.086 clat (msec): min=4, max=215, avg=148.89, stdev=23.27 00:28:41.087 lat (msec): min=4, max=215, avg=151.09, stdev=23.35 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 70], 5.00th=[ 117], 10.00th=[ 127], 20.00th=[ 133], 00:28:41.087 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 150], 60.00th=[ 157], 00:28:41.087 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 190], 
00:28:41.087 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 207], 99.95th=[ 207], 00:28:41.087 | 99.99th=[ 215] 00:28:41.087 bw ( KiB/s): min=83968, max=126976, per=6.90%, avg=107852.80, stdev=11940.95, samples=20 00:28:41.087 iops : min= 328, max= 496, avg=421.30, stdev=46.64, samples=20 00:28:41.087 lat (msec) : 10=0.09%, 20=0.09%, 50=0.56%, 100=1.05%, 250=98.20% 00:28:41.087 cpu : usr=1.06%, sys=1.22%, ctx=1317, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,4276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job4: (groupid=0, jobs=1): err= 0: pid=3722391: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=421, BW=105MiB/s (111MB/s)(1065MiB/10102msec); 0 zone resets 00:28:41.087 slat (usec): min=29, max=62065, avg=2343.40, stdev=4169.91 00:28:41.087 clat (msec): min=64, max=212, avg=149.33, stdev=17.72 00:28:41.087 lat (msec): min=64, max=212, avg=151.68, stdev=17.53 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 103], 5.00th=[ 123], 10.00th=[ 129], 20.00th=[ 136], 00:28:41.087 | 30.00th=[ 140], 40.00th=[ 144], 50.00th=[ 153], 60.00th=[ 157], 00:28:41.087 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:28:41.087 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 205], 00:28:41.087 | 99.99th=[ 213] 00:28:41.087 bw ( KiB/s): min=90112, max=126976, per=6.87%, avg=107443.20, stdev=11236.13, samples=20 00:28:41.087 iops : min= 352, max= 496, avg=419.70, stdev=43.89, samples=20 00:28:41.087 lat (msec) : 100=0.77%, 250=99.23% 00:28:41.087 cpu : usr=1.08%, sys=1.27%, ctx=1116, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,4260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job5: (groupid=0, jobs=1): err= 0: pid=3722403: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=521, BW=130MiB/s (137MB/s)(1312MiB/10054msec); 0 zone resets 00:28:41.087 slat (usec): min=22, max=70051, avg=1775.18, stdev=3441.55 00:28:41.087 clat (msec): min=3, max=192, avg=120.75, stdev=29.97 00:28:41.087 lat (msec): min=5, max=193, avg=122.52, stdev=30.28 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 20], 5.00th=[ 59], 10.00th=[ 92], 20.00th=[ 99], 00:28:41.087 | 30.00th=[ 114], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 129], 00:28:41.087 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 169], 00:28:41.087 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:28:41.087 | 99.99th=[ 192] 00:28:41.087 bw ( KiB/s): min=92160, max=178688, per=8.49%, avg=132684.80, stdev=22841.40, samples=20 00:28:41.087 iops : min= 360, max= 698, avg=518.30, stdev=89.22, samples=20 00:28:41.087 lat (msec) : 4=0.04%, 10=0.38%, 20=0.61%, 50=2.99%, 100=18.78% 00:28:41.087 lat (msec) : 250=77.20% 00:28:41.087 cpu : usr=1.24%, sys=1.56%, ctx=1722, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,5246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job6: (groupid=0, jobs=1): err= 0: pid=3722414: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=535, BW=134MiB/s (140MB/s)(1355MiB/10111msec); 0 zone resets 00:28:41.087 slat (usec): min=20, max=44718, avg=1764.34, stdev=3420.27 00:28:41.087 clat (msec): min=2, max=232, avg=117.63, stdev=35.58 00:28:41.087 lat (msec): min=3, max=232, avg=119.40, stdev=36.06 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 90], 20.00th=[ 97], 00:28:41.087 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 122], 00:28:41.087 | 70.00th=[ 128], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 171], 00:28:41.087 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 226], 99.95th=[ 226], 00:28:41.087 | 99.99th=[ 232] 00:28:41.087 bw ( KiB/s): min=90112, max=180224, per=8.77%, avg=137088.00, stdev=24405.80, samples=20 00:28:41.087 iops : min= 352, max= 704, avg=535.50, stdev=95.34, samples=20 00:28:41.087 lat (msec) : 4=0.07%, 10=0.79%, 20=1.99%, 50=3.67%, 100=19.51% 00:28:41.087 lat (msec) : 250=73.96% 00:28:41.087 cpu : usr=1.29%, sys=1.55%, ctx=1779, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,5418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job7: (groupid=0, jobs=1): err= 0: pid=3722424: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=657, BW=164MiB/s (172MB/s)(1662MiB/10113msec); 0 zone resets 00:28:41.087 slat (usec): min=24, max=25832, avg=1389.15, stdev=2605.79 00:28:41.087 clat (msec): min=4, max=229, avg=95.91, stdev=25.35 00:28:41.087 lat (msec): min=4, max=230, avg=97.30, stdev=25.61 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 23], 5.00th=[ 67], 10.00th=[ 71], 20.00th=[ 75], 00:28:41.087 | 30.00th=[ 82], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 100], 00:28:41.087 | 70.00th=[ 101], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 144], 00:28:41.087 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 215], 99.95th=[ 224], 00:28:41.087 | 99.99th=[ 230] 00:28:41.087 bw ( KiB/s): min=114688, max=220160, per=10.78%, avg=168576.00, stdev=31006.05, samples=20 00:28:41.087 iops : min= 448, max= 860, avg=658.50, stdev=121.12, samples=20 00:28:41.087 lat (msec) : 10=0.11%, 20=0.71%, 50=2.08%, 100=63.73%, 250=33.38% 00:28:41.087 cpu : usr=1.61%, sys=2.22%, ctx=2112, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,6648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job8: (groupid=0, jobs=1): err= 0: pid=3722450: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=589, BW=147MiB/s (155MB/s)(1491MiB/10113msec); 0 zone resets 00:28:41.087 slat (usec): min=22, max=49368, avg=1538.51, stdev=3159.87 00:28:41.087 clat (msec): min=9, max=223, avg=106.98, stdev=37.50 
00:28:41.087 lat (msec): min=9, max=223, avg=108.52, stdev=38.01 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 29], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 73], 00:28:41.087 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 101], 60.00th=[ 105], 00:28:41.087 | 70.00th=[ 130], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 171], 00:28:41.087 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 215], 00:28:41.087 | 99.99th=[ 224] 00:28:41.087 bw ( KiB/s): min=96256, max=227840, per=9.66%, avg=151014.40, stdev=38315.46, samples=20 00:28:41.087 iops : min= 376, max= 890, avg=589.90, stdev=149.67, samples=20 00:28:41.087 lat (msec) : 10=0.02%, 20=0.39%, 50=4.19%, 100=46.78%, 250=48.62% 00:28:41.087 cpu : usr=1.40%, sys=1.80%, ctx=2085, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,5962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job9: (groupid=0, jobs=1): err= 0: pid=3722462: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=557, BW=139MiB/s (146MB/s)(1409MiB/10114msec); 0 zone resets 00:28:41.087 slat (usec): min=23, max=19666, avg=1627.99, stdev=3157.99 00:28:41.087 clat (msec): min=4, max=228, avg=113.20, stdev=33.20 00:28:41.087 lat (msec): min=5, max=228, avg=114.83, stdev=33.70 00:28:41.087 clat percentiles (msec): 00:28:41.087 | 1.00th=[ 18], 5.00th=[ 40], 10.00th=[ 66], 20.00th=[ 93], 00:28:41.087 | 30.00th=[ 101], 40.00th=[ 115], 50.00th=[ 122], 60.00th=[ 123], 00:28:41.087 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 150], 95.00th=[ 159], 00:28:41.087 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 220], 99.95th=[ 220], 00:28:41.087 | 99.99th=[ 228] 00:28:41.087 bw ( KiB/s): min=106496, max=217600, per=9.13%, avg=142643.20, stdev=34541.24, samples=20 00:28:41.087 iops : min= 416, max= 850, avg=557.20, stdev=134.93, samples=20 00:28:41.087 lat (msec) : 10=0.25%, 20=1.31%, 50=5.22%, 100=23.18%, 250=70.04% 00:28:41.087 cpu : usr=1.17%, sys=1.86%, ctx=2036, majf=0, minf=1 00:28:41.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.087 issued rwts: total=0,5635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.087 job10: (groupid=0, jobs=1): err= 0: pid=3722472: Mon Jul 22 20:35:52 2024 00:28:41.087 write: IOPS=520, BW=130MiB/s (136MB/s)(1316MiB/10111msec); 0 zone resets 00:28:41.087 slat (usec): min=26, max=26140, avg=1840.02, stdev=3493.22 00:28:41.087 clat (msec): min=6, max=231, avg=121.02, stdev=41.21 00:28:41.087 lat (msec): min=7, max=231, avg=122.86, stdev=41.77 00:28:41.088 clat percentiles (msec): 00:28:41.088 | 1.00th=[ 28], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 77], 00:28:41.088 | 30.00th=[ 92], 40.00th=[ 100], 50.00th=[ 122], 60.00th=[ 150], 00:28:41.088 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 171], 00:28:41.088 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 224], 99.95th=[ 224], 00:28:41.088 | 99.99th=[ 232] 00:28:41.088 bw ( KiB/s): min=94208, max=228352, per=8.52%, avg=133145.60, stdev=44533.46, samples=20 00:28:41.088 iops : min= 368, max= 892, 
avg=520.10, stdev=173.96, samples=20 00:28:41.088 lat (msec) : 10=0.17%, 20=0.49%, 50=2.68%, 100=38.22%, 250=58.43% 00:28:41.088 cpu : usr=1.06%, sys=1.50%, ctx=1573, majf=0, minf=1 00:28:41.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:41.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:41.088 issued rwts: total=0,5264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:41.088 00:28:41.088 Run status group 0 (all jobs): 00:28:41.088 WRITE: bw=1526MiB/s (1601MB/s), 105MiB/s-173MiB/s (111MB/s-181MB/s), io=15.1GiB (16.2GB), run=10054-10114msec 00:28:41.088 00:28:41.088 Disk stats (read/write): 00:28:41.088 nvme0n1: ios=49/13936, merge=0/0, ticks=91/1232669, in_queue=1232760, util=96.87% 00:28:41.088 nvme10n1: ios=48/13447, merge=0/0, ticks=113/1231866, in_queue=1231979, util=97.37% 00:28:41.088 nvme1n1: ios=13/10633, merge=0/0, ticks=26/1229074, in_queue=1229100, util=97.07% 00:28:41.088 nvme2n1: ios=45/8539, merge=0/0, ticks=851/1230540, in_queue=1231391, util=99.87% 00:28:41.088 nvme3n1: ios=44/8504, merge=0/0, ticks=1533/1227480, in_queue=1229013, util=99.90% 00:28:41.088 nvme4n1: ios=49/9993, merge=0/0, ticks=1127/1197727, in_queue=1198854, util=99.87% 00:28:41.088 nvme5n1: ios=0/10814, merge=0/0, ticks=0/1229339, in_queue=1229339, util=98.02% 00:28:41.088 nvme6n1: ios=47/13267, merge=0/0, ticks=1365/1229196, in_queue=1230561, util=99.88% 00:28:41.088 nvme7n1: ios=0/11896, merge=0/0, ticks=0/1231134, in_queue=1231134, util=98.67% 00:28:41.088 nvme8n1: ios=0/11242, merge=0/0, ticks=0/1231023, in_queue=1231023, util=98.92% 00:28:41.088 nvme9n1: ios=44/10502, merge=0/0, ticks=671/1228756, in_queue=1229427, util=99.94% 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:41.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.088 20:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:41.088 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:41.347 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:41.347 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:41.347 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:41.347 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:41.607 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:42.175 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:42.175 20:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:42.175 20:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:42.434 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:42.434 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:42.693 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:42.693 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:42.693 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:42.693 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:42.693 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:42.953 20:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:42.953 20:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:43.212 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.212 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:43.783 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:43.783 20:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.783 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:44.043 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.043 20:35:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:44.302 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:44.302 20:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.302 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:44.561 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:44.562 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:44.562 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:44.562 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:44.562 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.821 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:45.080 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:45.080 
20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.080 rmmod nvme_tcp 00:28:45.080 rmmod nvme_fabrics 00:28:45.080 rmmod nvme_keyring 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3711688 ']' 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3711688 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 3711688 ']' 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 3711688 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.080 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3711688 00:28:45.080 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:45.080 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:45.080 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3711688' 00:28:45.080 killing process with pid 3711688 00:28:45.080 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 3711688 00:28:45.080 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 3711688 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:47.622 20:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.622 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:49.534 00:28:49.534 real 1m21.686s 00:28:49.534 user 5m11.441s 00:28:49.534 sys 0m21.728s 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:49.534 ************************************ 00:28:49.534 END TEST nvmf_multiconnection 00:28:49.534 ************************************ 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:49.534 ************************************ 00:28:49.534 START TEST nvmf_initiator_timeout 00:28:49.534 ************************************ 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:49.534 * Looking for test storage... 
00:28:49.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.534 20:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.534 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:57.670 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:28:57.671 20:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:57.671 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.671 20:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:57.671 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:57.671 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.671 20:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:57.671 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:57.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:28:57.671 00:28:57.671 --- 10.0.0.2 ping statistics --- 00:28:57.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.671 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:57.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:28:57.671 00:28:57.671 --- 10.0.0.1 ping statistics --- 00:28:57.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.671 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:57.671 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3729709 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3729709 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3729709 ']' 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:57.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.672 20:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 [2024-07-22 20:36:08.734338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:57.672 [2024-07-22 20:36:08.734459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.672 [2024-07-22 20:36:08.869825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.672 [2024-07-22 20:36:09.055108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.672 [2024-07-22 20:36:09.055148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.672 [2024-07-22 20:36:09.055161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.672 [2024-07-22 20:36:09.055170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.672 [2024-07-22 20:36:09.055180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.672 [2024-07-22 20:36:09.055314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.672 [2024-07-22 20:36:09.055414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.672 [2024-07-22 20:36:09.055456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.672 [2024-07-22 20:36:09.055490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 Malloc0 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 Delay0 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 [2024-07-22 20:36:09.584367] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.672 [2024-07-22 20:36:09.612634] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.672 20:36:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:59.585 20:36:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:59.585 20:36:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 
-- # local i=0 00:28:59.585 20:36:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:59.585 20:36:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:59.585 20:36:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3731135 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:29:01.523 20:36:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:29:01.523 [global] 00:29:01.523 thread=1 00:29:01.523 invalidate=1 00:29:01.523 rw=write 00:29:01.523 time_based=1 00:29:01.523 runtime=60 00:29:01.523 ioengine=libaio 00:29:01.523 direct=1 00:29:01.523 bs=4096 00:29:01.523 iodepth=1 00:29:01.523 norandommap=0 00:29:01.523 numjobs=1 00:29:01.523 00:29:01.523 verify_dump=1 00:29:01.523 verify_backlog=512 00:29:01.523 verify_state_save=0 00:29:01.523 do_verify=1 00:29:01.523 verify=crc32c-intel 00:29:01.523 [job0] 00:29:01.523 filename=/dev/nvme0n1 00:29:01.523 Could not set queue depth (nvme0n1) 00:29:01.783 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:01.783 fio-3.35 00:29:01.783 Starting 1 thread 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.328 true 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.328 true 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.328 20:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.328 true 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.328 true 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.328 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.672 true 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.672 true 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.672 true 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.672 true 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.672 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:07.672 20:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3731135 00:30:03.941 00:30:03.941 job0: (groupid=0, jobs=1): err= 0: pid=3731302: Mon Jul 22 20:37:13 2024 00:30:03.941 read: IOPS=25, BW=102KiB/s (104kB/s)(6096KiB/60019msec) 00:30:03.941 slat (usec): min=7, max=6535, avg=29.41, stdev=166.88 00:30:03.941 clat (usec): min=801, max=41684k, avg=38565.43, stdev=1067623.96 00:30:03.941 lat (usec): min=828, max=41684k, avg=38594.84, stdev=1067623.97 00:30:03.941 clat percentiles (usec): 00:30:03.941 | 1.00th=[ 988], 5.00th=[ 1057], 10.00th=[ 1090], 00:30:03.941 | 20.00th=[ 1123], 30.00th=[ 1139], 40.00th=[ 1156], 00:30:03.941 | 50.00th=[ 1172], 60.00th=[ 1188], 70.00th=[ 1221], 00:30:03.941 | 80.00th=[ 41681], 90.00th=[ 42206], 95.00th=[ 42206], 00:30:03.941 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 44827], 00:30:03.941 | 99.95th=[17112761], 99.99th=[17112761] 00:30:03.941 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60019msec); 0 zone resets 00:30:03.941 slat (usec): min=9, max=32439, avg=70.75, stdev=1125.12 00:30:03.941 clat (usec): min=431, max=1193, avg=694.74, stdev=91.06 00:30:03.941 lat (usec): min=442, max=33213, avg=765.49, stdev=1132.23 00:30:03.941 clat percentiles (usec): 00:30:03.941 | 1.00th=[ 457], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 627], 00:30:03.941 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 725], 00:30:03.941 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 824], 00:30:03.941 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 955], 99.95th=[ 1188], 00:30:03.941 | 99.99th=[ 1188] 00:30:03.941 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=3 00:30:03.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=3 00:30:03.941 lat (usec) : 500=1.47%, 750=32.48%, 1000=17.03% 00:30:03.941 lat (msec) : 2=36.70%, 50=12.29%, >=2000=0.03% 00:30:03.941 cpu : usr=0.11%, sys=0.18%, ctx=3066, majf=0, minf=1 00:30:03.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.941 issued rwts: total=1524,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:03.941 00:30:03.941 Run status group 0 (all jobs): 00:30:03.941 READ: bw=102KiB/s (104kB/s), 102KiB/s-102KiB/s (104kB/s-104kB/s), io=6096KiB (6242kB), run=60019-60019msec 00:30:03.941 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60019-60019msec 00:30:03.941 00:30:03.941 Disk stats (read/write): 00:30:03.941 nvme0n1: ios=1575/1536, merge=0/0, ticks=18711/918, in_queue=19629, util=99.95% 00:30:03.941 20:37:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:03.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- 
# grep -q -w SPDKISFASTANDAWESOME 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:03.941 nvmf hotplug test: fio successful as expected 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.941 rmmod nvme_tcp 00:30:03.941 rmmod nvme_fabrics 00:30:03.941 rmmod nvme_keyring 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3729709 ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3729709 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3729709 ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3729709 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3729709 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3729709' 00:30:03.941 killing process with pid 3729709 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3729709 00:30:03.941 20:37:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3729709 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.941 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.327 00:30:05.327 real 1m15.822s 00:30:05.327 user 4m39.493s 00:30:05.327 sys 0m7.103s 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 ************************************ 00:30:05.327 END TEST nvmf_initiator_timeout 00:30:05.327 ************************************ 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.327 20:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.919 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:11.920 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:11.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:11.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:11.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # 
run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.920 20:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:12.182 ************************************ 00:30:12.182 START TEST nvmf_perf_adq 00:30:12.182 ************************************ 00:30:12.182 20:37:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:12.182 * Looking for test storage... 00:30:12.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.182 20:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.182 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.774 20:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:18.774 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:18.774 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.774 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:18.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:18.775 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:30:18.775 20:37:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:20.688 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:22.644 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.956 20:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:27.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:27.956 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.956 20:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:27.956 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:27.956 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.956 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:30:27.957 00:30:27.957 --- 10.0.0.2 ping statistics --- 00:30:27.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.957 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:30:27.957 00:30:27.957 --- 10.0.0.1 ping statistics --- 00:30:27.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.957 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3752282 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@482 -- # waitforlisten 3752282 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3752282 ']' 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.957 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:27.957 [2024-07-22 20:37:39.885374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:27.957 [2024-07-22 20:37:39.885509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.957 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.218 [2024-07-22 20:37:40.022902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.218 [2024-07-22 20:37:40.211260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.218 [2024-07-22 20:37:40.211306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.218 [2024-07-22 20:37:40.211320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.218 [2024-07-22 20:37:40.211329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.218 [2024-07-22 20:37:40.211339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
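The ip/iptables entries above condense into the following sequence; this is a minimal sketch assuming the same cvl_0_0/cvl_0_1 interface names the PCI scan reported earlier and assuming it is run from the SPDK repository root. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator-side port (10.0.0.1), and the target is launched under the namespace with --wait-for-rpc so socket options can be applied before framework_start_init, as the next entries show.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting the target, as the ping
  # statistics above do.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # --wait-for-rpc defers subsystem initialization so the ADQ-related sock
  # options can be set first (see the sock_impl_set_options entry below).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
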
00:30:28.218 [2024-07-22 20:37:40.211476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.218 [2024-07-22 20:37:40.211565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.218 [2024-07-22 20:37:40.211620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.218 [2024-07-22 20:37:40.211654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.790 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.051 [2024-07-22 20:37:40.962004] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.051 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.051 Malloc1 00:30:29.051 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.051 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:29.052 [2024-07-22 20:37:41.058565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3752638 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:30:29.052 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:29.312 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:31.223 "tick_rate": 2400000000, 00:30:31.223 "poll_groups": [ 00:30:31.223 { 00:30:31.223 "name": "nvmf_tgt_poll_group_000", 00:30:31.223 "admin_qpairs": 1, 00:30:31.223 "io_qpairs": 1, 00:30:31.223 "current_admin_qpairs": 1, 00:30:31.223 
"current_io_qpairs": 1, 00:30:31.223 "pending_bdev_io": 0, 00:30:31.223 "completed_nvme_io": 20231, 00:30:31.223 "transports": [ 00:30:31.223 { 00:30:31.223 "trtype": "TCP" 00:30:31.223 } 00:30:31.223 ] 00:30:31.223 }, 00:30:31.223 { 00:30:31.223 "name": "nvmf_tgt_poll_group_001", 00:30:31.223 "admin_qpairs": 0, 00:30:31.223 "io_qpairs": 1, 00:30:31.223 "current_admin_qpairs": 0, 00:30:31.223 "current_io_qpairs": 1, 00:30:31.223 "pending_bdev_io": 0, 00:30:31.223 "completed_nvme_io": 27961, 00:30:31.223 "transports": [ 00:30:31.223 { 00:30:31.223 "trtype": "TCP" 00:30:31.223 } 00:30:31.223 ] 00:30:31.223 }, 00:30:31.223 { 00:30:31.223 "name": "nvmf_tgt_poll_group_002", 00:30:31.223 "admin_qpairs": 0, 00:30:31.223 "io_qpairs": 1, 00:30:31.223 "current_admin_qpairs": 0, 00:30:31.223 "current_io_qpairs": 1, 00:30:31.223 "pending_bdev_io": 0, 00:30:31.223 "completed_nvme_io": 21102, 00:30:31.223 "transports": [ 00:30:31.223 { 00:30:31.223 "trtype": "TCP" 00:30:31.223 } 00:30:31.223 ] 00:30:31.223 }, 00:30:31.223 { 00:30:31.223 "name": "nvmf_tgt_poll_group_003", 00:30:31.223 "admin_qpairs": 0, 00:30:31.223 "io_qpairs": 1, 00:30:31.223 "current_admin_qpairs": 0, 00:30:31.223 "current_io_qpairs": 1, 00:30:31.223 "pending_bdev_io": 0, 00:30:31.223 "completed_nvme_io": 20396, 00:30:31.223 "transports": [ 00:30:31.223 { 00:30:31.223 "trtype": "TCP" 00:30:31.223 } 00:30:31.223 ] 00:30:31.223 } 00:30:31.223 ] 00:30:31.223 }' 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:31.223 20:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3752638 00:30:39.359 Initializing NVMe Controllers 00:30:39.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:39.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:39.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:39.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:39.359 Initialization complete. Launching workers. 
00:30:39.359 ======================================================== 00:30:39.359 Latency(us) 00:30:39.359 Device Information : IOPS MiB/s Average min max 00:30:39.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14035.08 54.82 4569.11 1205.59 45325.07 00:30:39.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 16010.52 62.54 3996.91 1390.57 9619.70 00:30:39.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13724.39 53.61 4677.09 1160.38 44768.87 00:30:39.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11596.35 45.30 5518.65 1310.46 10171.54 00:30:39.359 ======================================================== 00:30:39.359 Total : 55366.34 216.27 4629.29 1160.38 45325.07 00:30:39.359 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.359 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.359 rmmod nvme_tcp 00:30:39.359 rmmod nvme_fabrics 00:30:39.359 rmmod nvme_keyring 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3752282 ']' 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3752282 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3752282 ']' 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3752282 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752282 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752282' 00:30:39.620 killing process with pid 3752282 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3752282 00:30:39.620 20:37:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3752282 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:40.563 
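[annotation] In this first run the posix sock layer is configured with placement-id 0 (no ADQ-aware placement) and no tc filters are installed, so the four IO qpairs opened by spdk_nvme_perf (-c 0xF0, i.e. four initiator cores) are expected to land one per target poll group. That is what the nvmf_get_stats check above asserts. Roughly, assuming the stats JSON has been captured in $nvmf_stats the way the script does:

    # one output line per poll group that owns exactly one active IO qpair
    count=$(echo "$nvmf_stats" \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    if [[ $count -ne 4 ]]; then
        echo "ERROR: IO qpairs not spread across all 4 poll groups"
        exit 1
    fi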
20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.563 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.478 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.478 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:30:42.478 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:43.865 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:45.780 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:51.142 20:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:51.142 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:51.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:51.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:51.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.143 20:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:51.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:51.143 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:51.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:51.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:30:51.143 00:30:51.143 --- 10.0.0.2 ping statistics --- 00:30:51.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.143 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.480 ms 00:30:51.143 00:30:51.143 --- 10.0.0.1 ping statistics --- 00:30:51.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.143 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:51.143 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:51.405 net.core.busy_poll = 1 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:51.405 net.core.busy_read = 1 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:51.405 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:51.666 20:38:03 
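[annotation] The adq_configure_driver step above is the host-side half of ADQ: hardware TC offload and busy polling are switched on, the port's queues are split into two traffic classes with mqprio, and a flower filter steers NVMe/TCP traffic (TCP dport 4420 to 10.0.0.2) into the second TC in hardware. A commented sketch of the same commands as captured in the log; the queue split 2@0 2@2, /usr/sbin/tc and the helper-script location are the values this run uses:

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1          # busy-poll on poll/epoll
    sysctl -w net.core.busy_read=1          # busy-poll on socket reads
    # two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded to the NIC
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dport 4420) into TC1 in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # SPDK helper that sets up XPS/RX queue affinity for the interface
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0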
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3757274 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3757274 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3757274 ']' 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.666 20:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:51.666 [2024-07-22 20:38:03.532632] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:51.666 [2024-07-22 20:38:03.532761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.666 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.666 [2024-07-22 20:38:03.667795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.927 [2024-07-22 20:38:03.850869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.927 [2024-07-22 20:38:03.850913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.927 [2024-07-22 20:38:03.850926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.927 [2024-07-22 20:38:03.850935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.927 [2024-07-22 20:38:03.850945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
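[annotation] The target is again started with --wait-for-rpc so the posix sock implementation can be reconfigured before the TCP transport is created; that ordering is what lets placement-id take effect for the sockets the transport later opens. The RPC sequence that follows in the log (rpc_cmd here is the test framework's RPC helper) boils down to the sketch below; the first run earlier in the log uses the same sequence with --enable-placement-id 0 and --sock-priority 0:

    rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc_cmd framework_start_init            # only now finish subsystem initialization
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420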
00:30:51.927 [2024-07-22 20:38:03.851121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.927 [2024-07-22 20:38:03.851215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.927 [2024-07-22 20:38:03.851326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.927 [2024-07-22 20:38:03.851351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.500 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.791 [2024-07-22 20:38:04.624993] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.791 Malloc1 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.791 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.792 [2024-07-22 20:38:04.721730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3757466 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:30:52.792 20:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:52.792 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.338 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:30:55.338 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.338 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:55.338 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.338 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:30:55.338 "tick_rate": 2400000000, 00:30:55.338 "poll_groups": [ 00:30:55.338 { 00:30:55.338 "name": "nvmf_tgt_poll_group_000", 00:30:55.338 "admin_qpairs": 1, 00:30:55.338 "io_qpairs": 2, 00:30:55.338 "current_admin_qpairs": 1, 00:30:55.338 
"current_io_qpairs": 2, 00:30:55.338 "pending_bdev_io": 0, 00:30:55.338 "completed_nvme_io": 26524, 00:30:55.338 "transports": [ 00:30:55.338 { 00:30:55.338 "trtype": "TCP" 00:30:55.338 } 00:30:55.338 ] 00:30:55.338 }, 00:30:55.338 { 00:30:55.338 "name": "nvmf_tgt_poll_group_001", 00:30:55.338 "admin_qpairs": 0, 00:30:55.338 "io_qpairs": 2, 00:30:55.338 "current_admin_qpairs": 0, 00:30:55.338 "current_io_qpairs": 2, 00:30:55.338 "pending_bdev_io": 0, 00:30:55.338 "completed_nvme_io": 37207, 00:30:55.338 "transports": [ 00:30:55.338 { 00:30:55.338 "trtype": "TCP" 00:30:55.338 } 00:30:55.338 ] 00:30:55.338 }, 00:30:55.338 { 00:30:55.339 "name": "nvmf_tgt_poll_group_002", 00:30:55.339 "admin_qpairs": 0, 00:30:55.339 "io_qpairs": 0, 00:30:55.339 "current_admin_qpairs": 0, 00:30:55.339 "current_io_qpairs": 0, 00:30:55.339 "pending_bdev_io": 0, 00:30:55.339 "completed_nvme_io": 0, 00:30:55.339 "transports": [ 00:30:55.339 { 00:30:55.339 "trtype": "TCP" 00:30:55.339 } 00:30:55.339 ] 00:30:55.339 }, 00:30:55.339 { 00:30:55.339 "name": "nvmf_tgt_poll_group_003", 00:30:55.339 "admin_qpairs": 0, 00:30:55.339 "io_qpairs": 0, 00:30:55.339 "current_admin_qpairs": 0, 00:30:55.339 "current_io_qpairs": 0, 00:30:55.339 "pending_bdev_io": 0, 00:30:55.339 "completed_nvme_io": 0, 00:30:55.339 "transports": [ 00:30:55.339 { 00:30:55.339 "trtype": "TCP" 00:30:55.339 } 00:30:55.339 ] 00:30:55.339 } 00:30:55.339 ] 00:30:55.339 }' 00:30:55.339 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:55.339 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:30:55.339 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:30:55.339 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:30:55.339 20:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3757466 00:31:03.477 Initializing NVMe Controllers 00:31:03.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:03.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:03.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:03.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:03.477 Initialization complete. Launching workers. 
00:31:03.477 ======================================================== 00:31:03.477 Latency(us) 00:31:03.477 Device Information : IOPS MiB/s Average min max 00:31:03.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10321.10 40.32 6201.33 1260.68 50952.74 00:31:03.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7722.80 30.17 8287.34 1494.48 51748.63 00:31:03.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7733.50 30.21 8274.77 1530.16 52378.73 00:31:03.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12107.40 47.29 5285.82 1059.62 50322.46 00:31:03.477 ======================================================== 00:31:03.477 Total : 37884.80 147.99 6757.24 1059.62 52378.73 00:31:03.477 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.477 rmmod nvme_tcp 00:31:03.477 rmmod nvme_fabrics 00:31:03.477 rmmod nvme_keyring 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3757274 ']' 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3757274 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3757274 ']' 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3757274 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:03.477 20:38:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3757274 00:31:03.477 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:03.477 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:03.477 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3757274' 00:31:03.477 killing process with pid 3757274 00:31:03.477 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3757274 00:31:03.477 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3757274 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.048 
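[annotation] With placement-id enabled and the flower filter concentrating NVMe/TCP onto one traffic class, connections that share a hardware queue end up grouped on the same target poll group: in the stats above the four IO qpairs sit on two poll groups (two each) while the other two stay idle. The second check asserts exactly that by counting idle poll groups; roughly, under the same $nvmf_stats assumption as before:

    idle=$(echo "$nvmf_stats" \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "ERROR: ADQ steering did not concentrate IO qpairs onto fewer poll groups"
        exit 1
    fi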
20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.048 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:06.595 00:31:06.595 real 0m54.098s 00:31:06.595 user 2m52.981s 00:31:06.595 sys 0m11.681s 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:06.595 ************************************ 00:31:06.595 END TEST nvmf_perf_adq 00:31:06.595 ************************************ 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:06.595 ************************************ 00:31:06.595 START TEST nvmf_shutdown 00:31:06.595 ************************************ 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:06.595 * Looking for test storage... 
00:31:06.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.595 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.596 20:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:06.596 20:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:06.596 ************************************ 00:31:06.596 START TEST nvmf_shutdown_tc1 00:31:06.596 ************************************ 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.596 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:13.188 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:13.188 20:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:13.188 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:13.188 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.188 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:13.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.189 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.450 20:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:31:13.450 00:31:13.450 --- 10.0.0.2 ping statistics --- 00:31:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.450 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms 00:31:13.450 00:31:13.450 --- 10.0.0.1 ping statistics --- 00:31:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.450 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.450 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3763914 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3763914 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3763914 ']' 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.712 20:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:13.712 [2024-07-22 20:38:25.579408] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:13.712 [2024-07-22 20:38:25.579538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.712 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.712 [2024-07-22 20:38:25.730921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.973 [2024-07-22 20:38:25.959192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.973 [2024-07-22 20:38:25.959268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.973 [2024-07-22 20:38:25.959283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.973 [2024-07-22 20:38:25.959294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.973 [2024-07-22 20:38:25.959306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
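The trace above is the nvmf_tcp_init bring-up: the target-side port is moved into a private network namespace, the two ports get 10.0.0.2 (target, inside the namespace) and 10.0.0.1 (initiator, default namespace), NVMe/TCP port 4420 is opened in iptables, connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of the same sequence, assuming a repo-relative nvmf_tgt path and reusing the interface names detected in this run:

# Sketch of the nvmf_tcp_init steps traced above (not the verbatim
# nvmf/common.sh code). TGT_IF/INI_IF are the two ice ports found on this
# machine; the IPs, namespace name and nvmf_tgt flags are copied from the run.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # default namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> initiator
# the target itself runs inside the namespace; -m 0x1E pins reactors to cores 1-4
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &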
00:31:13.973 [2024-07-22 20:38:25.959480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.973 [2024-07-22 20:38:25.959619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.973 [2024-07-22 20:38:25.959714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.973 [2024-07-22 20:38:25.959746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:14.544 [2024-07-22 20:38:26.373681] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.544 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:14.544 Malloc1 00:31:14.544 [2024-07-22 20:38:26.514286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.805 Malloc2 00:31:14.805 Malloc3 00:31:14.805 Malloc4 00:31:14.805 Malloc5 00:31:15.066 Malloc6 00:31:15.066 Malloc7 00:31:15.066 Malloc8 00:31:15.327 Malloc9 00:31:15.327 Malloc10 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3764294 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3764294 /var/tmp/bdevperf.sock 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3764294 ']' 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:15.327 20:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.327 { 00:31:15.327 "params": { 00:31:15.327 "name": "Nvme$subsystem", 00:31:15.327 "trtype": "$TEST_TRANSPORT", 00:31:15.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.327 "adrfam": "ipv4", 00:31:15.327 "trsvcid": "$NVMF_PORT", 00:31:15.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.327 "hdgst": ${hdgst:-false}, 00:31:15.327 "ddgst": ${ddgst:-false} 00:31:15.327 }, 00:31:15.327 "method": "bdev_nvme_attach_controller" 00:31:15.327 } 00:31:15.327 EOF 00:31:15.327 )") 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.327 { 00:31:15.327 "params": { 00:31:15.327 "name": "Nvme$subsystem", 00:31:15.327 "trtype": "$TEST_TRANSPORT", 00:31:15.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.327 "adrfam": "ipv4", 00:31:15.327 "trsvcid": "$NVMF_PORT", 00:31:15.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.327 "hdgst": ${hdgst:-false}, 00:31:15.327 "ddgst": ${ddgst:-false} 00:31:15.327 }, 00:31:15.327 "method": "bdev_nvme_attach_controller" 00:31:15.327 } 00:31:15.327 EOF 00:31:15.327 )") 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.327 { 00:31:15.327 "params": { 00:31:15.327 "name": 
"Nvme$subsystem", 00:31:15.327 "trtype": "$TEST_TRANSPORT", 00:31:15.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.327 "adrfam": "ipv4", 00:31:15.327 "trsvcid": "$NVMF_PORT", 00:31:15.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.327 "hdgst": ${hdgst:-false}, 00:31:15.327 "ddgst": ${ddgst:-false} 00:31:15.327 }, 00:31:15.327 "method": "bdev_nvme_attach_controller" 00:31:15.327 } 00:31:15.327 EOF 00:31:15.327 )") 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.327 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.328 { 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme$subsystem", 00:31:15.328 "trtype": "$TEST_TRANSPORT", 00:31:15.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "$NVMF_PORT", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.328 "hdgst": ${hdgst:-false}, 00:31:15.328 "ddgst": ${ddgst:-false} 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 } 00:31:15.328 EOF 00:31:15.328 )") 00:31:15.328 [2024-07-22 20:38:27.329154] Starting SPDK 
v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:15.328 [2024-07-22 20:38:27.329262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:15.328 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme1", 00:31:15.328 "trtype": "tcp", 00:31:15.328 "traddr": "10.0.0.2", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "4420", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:15.328 "hdgst": false, 00:31:15.328 "ddgst": false 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 },{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme2", 00:31:15.328 "trtype": "tcp", 00:31:15.328 "traddr": "10.0.0.2", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "4420", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:15.328 "hdgst": false, 00:31:15.328 "ddgst": false 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 },{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme3", 00:31:15.328 "trtype": "tcp", 00:31:15.328 "traddr": "10.0.0.2", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "4420", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:15.328 "hdgst": false, 00:31:15.328 "ddgst": false 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 },{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme4", 00:31:15.328 "trtype": "tcp", 00:31:15.328 "traddr": "10.0.0.2", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "4420", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:15.328 "hdgst": false, 00:31:15.328 "ddgst": false 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 },{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme5", 00:31:15.328 "trtype": "tcp", 00:31:15.328 "traddr": "10.0.0.2", 00:31:15.328 "adrfam": "ipv4", 00:31:15.328 "trsvcid": "4420", 00:31:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:15.328 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:15.328 "hdgst": false, 00:31:15.328 "ddgst": false 00:31:15.328 }, 00:31:15.328 "method": "bdev_nvme_attach_controller" 00:31:15.328 },{ 00:31:15.328 "params": { 00:31:15.328 "name": "Nvme6", 00:31:15.329 "trtype": "tcp", 00:31:15.329 "traddr": "10.0.0.2", 00:31:15.329 "adrfam": "ipv4", 00:31:15.329 "trsvcid": "4420", 00:31:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:15.329 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:15.329 "hdgst": false, 00:31:15.329 "ddgst": false 00:31:15.329 }, 00:31:15.329 "method": "bdev_nvme_attach_controller" 00:31:15.329 },{ 00:31:15.329 "params": { 00:31:15.329 "name": "Nvme7", 00:31:15.329 "trtype": "tcp", 00:31:15.329 "traddr": "10.0.0.2", 00:31:15.329 "adrfam": "ipv4", 
00:31:15.329 "trsvcid": "4420", 00:31:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:15.329 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:15.329 "hdgst": false, 00:31:15.329 "ddgst": false 00:31:15.329 }, 00:31:15.329 "method": "bdev_nvme_attach_controller" 00:31:15.329 },{ 00:31:15.329 "params": { 00:31:15.329 "name": "Nvme8", 00:31:15.329 "trtype": "tcp", 00:31:15.329 "traddr": "10.0.0.2", 00:31:15.329 "adrfam": "ipv4", 00:31:15.329 "trsvcid": "4420", 00:31:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:15.329 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:15.329 "hdgst": false, 00:31:15.329 "ddgst": false 00:31:15.329 }, 00:31:15.329 "method": "bdev_nvme_attach_controller" 00:31:15.329 },{ 00:31:15.329 "params": { 00:31:15.329 "name": "Nvme9", 00:31:15.329 "trtype": "tcp", 00:31:15.329 "traddr": "10.0.0.2", 00:31:15.329 "adrfam": "ipv4", 00:31:15.329 "trsvcid": "4420", 00:31:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:15.329 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:15.329 "hdgst": false, 00:31:15.329 "ddgst": false 00:31:15.329 }, 00:31:15.329 "method": "bdev_nvme_attach_controller" 00:31:15.329 },{ 00:31:15.329 "params": { 00:31:15.329 "name": "Nvme10", 00:31:15.329 "trtype": "tcp", 00:31:15.329 "traddr": "10.0.0.2", 00:31:15.329 "adrfam": "ipv4", 00:31:15.329 "trsvcid": "4420", 00:31:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:15.329 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:15.329 "hdgst": false, 00:31:15.329 "ddgst": false 00:31:15.329 }, 00:31:15.329 "method": "bdev_nvme_attach_controller" 00:31:15.329 }' 00:31:15.590 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.590 [2024-07-22 20:38:27.442290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.850 [2024-07-22 20:38:27.621930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3764294 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:31:17.763 20:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:31:19.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3764294 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3763914 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.149 { 00:31:19.149 "params": { 00:31:19.149 "name": "Nvme$subsystem", 00:31:19.149 "trtype": "$TEST_TRANSPORT", 00:31:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.149 "adrfam": "ipv4", 00:31:19.149 "trsvcid": "$NVMF_PORT", 00:31:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.149 "hdgst": ${hdgst:-false}, 00:31:19.149 "ddgst": ${ddgst:-false} 00:31:19.149 }, 00:31:19.149 "method": "bdev_nvme_attach_controller" 00:31:19.149 } 00:31:19.149 EOF 00:31:19.149 )") 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.149 { 00:31:19.149 "params": { 00:31:19.149 "name": "Nvme$subsystem", 00:31:19.149 "trtype": "$TEST_TRANSPORT", 00:31:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.149 "adrfam": "ipv4", 00:31:19.149 "trsvcid": "$NVMF_PORT", 00:31:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.149 "hdgst": ${hdgst:-false}, 00:31:19.149 "ddgst": ${ddgst:-false} 00:31:19.149 }, 00:31:19.149 "method": "bdev_nvme_attach_controller" 00:31:19.149 } 00:31:19.149 EOF 00:31:19.149 )") 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.149 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.149 { 00:31:19.149 "params": { 00:31:19.149 "name": "Nvme$subsystem", 00:31:19.149 "trtype": "$TEST_TRANSPORT", 00:31:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.149 "adrfam": "ipv4", 00:31:19.149 "trsvcid": "$NVMF_PORT", 00:31:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.150 { 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme$subsystem", 00:31:19.150 "trtype": "$TEST_TRANSPORT", 00:31:19.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "$NVMF_PORT", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.150 "hdgst": ${hdgst:-false}, 00:31:19.150 "ddgst": ${ddgst:-false} 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 } 00:31:19.150 EOF 00:31:19.150 )") 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:19.150 [2024-07-22 20:38:30.851445] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
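Between the two config dumps the flow is: the helper bdev_svc holding the ten connections is killed with kill -9 (the shutdown event tc1 exercises), kill -0 confirms the nvmf target itself is still alive, and bdevperf is then pointed at the same ten subsystems to verify they still serve I/O. A sketch of that bdevperf call, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available and using a repo-relative binary path:

# The generated JSON reaches bdevperf through process substitution (the
# /dev/fd/62 path seen above); the flags match the run: 64 outstanding I/Os
# of 64 KiB each, "verify" read-back checking, for 1 second.
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1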
00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:19.150 [2024-07-22 20:38:30.851558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764996 ] 00:31:19.150 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme1", 00:31:19.150 "trtype": "tcp", 00:31:19.150 "traddr": "10.0.0.2", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "4420", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:19.150 "hdgst": false, 00:31:19.150 "ddgst": false 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 },{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme2", 00:31:19.150 "trtype": "tcp", 00:31:19.150 "traddr": "10.0.0.2", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "4420", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:19.150 "hdgst": false, 00:31:19.150 "ddgst": false 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 },{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme3", 00:31:19.150 "trtype": "tcp", 00:31:19.150 "traddr": "10.0.0.2", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "4420", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:19.150 "hdgst": false, 00:31:19.150 "ddgst": false 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 },{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme4", 00:31:19.150 "trtype": "tcp", 00:31:19.150 "traddr": "10.0.0.2", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "4420", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:19.150 "hdgst": false, 00:31:19.150 "ddgst": false 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 },{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme5", 00:31:19.150 "trtype": "tcp", 00:31:19.150 "traddr": "10.0.0.2", 00:31:19.150 "adrfam": "ipv4", 00:31:19.150 "trsvcid": "4420", 00:31:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:19.150 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:19.150 "hdgst": false, 00:31:19.150 "ddgst": false 00:31:19.150 }, 00:31:19.150 "method": "bdev_nvme_attach_controller" 00:31:19.150 },{ 00:31:19.150 "params": { 00:31:19.150 "name": "Nvme6", 00:31:19.151 "trtype": "tcp", 00:31:19.151 "traddr": "10.0.0.2", 00:31:19.151 "adrfam": "ipv4", 00:31:19.151 "trsvcid": "4420", 00:31:19.151 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:19.151 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:19.151 "hdgst": false, 00:31:19.151 "ddgst": false 00:31:19.151 }, 00:31:19.151 "method": "bdev_nvme_attach_controller" 00:31:19.151 },{ 00:31:19.151 "params": { 00:31:19.151 "name": "Nvme7", 00:31:19.151 "trtype": "tcp", 00:31:19.151 "traddr": "10.0.0.2", 00:31:19.151 "adrfam": "ipv4", 00:31:19.151 "trsvcid": "4420", 00:31:19.151 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:19.151 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:19.151 "hdgst": false, 00:31:19.151 "ddgst": false 00:31:19.151 }, 00:31:19.151 "method": "bdev_nvme_attach_controller" 
00:31:19.151 },{ 00:31:19.151 "params": { 00:31:19.151 "name": "Nvme8", 00:31:19.151 "trtype": "tcp", 00:31:19.151 "traddr": "10.0.0.2", 00:31:19.151 "adrfam": "ipv4", 00:31:19.151 "trsvcid": "4420", 00:31:19.151 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:19.151 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:19.151 "hdgst": false, 00:31:19.151 "ddgst": false 00:31:19.151 }, 00:31:19.151 "method": "bdev_nvme_attach_controller" 00:31:19.151 },{ 00:31:19.151 "params": { 00:31:19.151 "name": "Nvme9", 00:31:19.151 "trtype": "tcp", 00:31:19.151 "traddr": "10.0.0.2", 00:31:19.151 "adrfam": "ipv4", 00:31:19.151 "trsvcid": "4420", 00:31:19.151 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:19.151 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:19.151 "hdgst": false, 00:31:19.151 "ddgst": false 00:31:19.151 }, 00:31:19.151 "method": "bdev_nvme_attach_controller" 00:31:19.151 },{ 00:31:19.151 "params": { 00:31:19.151 "name": "Nvme10", 00:31:19.151 "trtype": "tcp", 00:31:19.151 "traddr": "10.0.0.2", 00:31:19.151 "adrfam": "ipv4", 00:31:19.151 "trsvcid": "4420", 00:31:19.151 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:19.151 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:19.151 "hdgst": false, 00:31:19.151 "ddgst": false 00:31:19.151 }, 00:31:19.151 "method": "bdev_nvme_attach_controller" 00:31:19.151 }' 00:31:19.151 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.151 [2024-07-22 20:38:30.961604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.151 [2024-07-22 20:38:31.139181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.066 Running I/O for 1 seconds... 00:31:22.008 00:31:22.008 Latency(us) 00:31:22.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme1n1 : 1.16 221.59 13.85 0.00 0.00 285573.33 23811.41 284863.15 00:31:22.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme2n1 : 1.17 218.35 13.65 0.00 0.00 284944.21 23374.51 270882.13 00:31:22.008 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme3n1 : 1.14 228.19 14.26 0.00 0.00 265630.85 5051.73 270882.13 00:31:22.008 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme4n1 : 1.14 224.05 14.00 0.00 0.00 266719.15 20534.61 269134.51 00:31:22.008 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme5n1 : 1.18 216.97 13.56 0.00 0.00 271531.52 21517.65 274377.39 00:31:22.008 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme6n1 : 1.15 225.75 14.11 0.00 0.00 255046.05 4669.44 223696.21 00:31:22.008 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme7n1 : 1.16 220.37 13.77 0.00 0.00 257201.71 23265.28 277872.64 00:31:22.008 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme8n1 : 1.17 219.53 13.72 0.00 0.00 253277.87 
21189.97 270882.13 00:31:22.008 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme9n1 : 1.18 216.79 13.55 0.00 0.00 251602.56 19770.03 272629.76 00:31:22.008 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.008 Verification LBA range: start 0x0 length 0x400 00:31:22.008 Nvme10n1 : 1.19 215.36 13.46 0.00 0.00 248954.03 19333.12 298844.16 00:31:22.008 =================================================================================================================== 00:31:22.009 Total : 2206.94 137.93 0.00 0.00 264036.57 4669.44 298844.16 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.608 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.608 rmmod nvme_tcp 00:31:22.868 rmmod nvme_fabrics 00:31:22.868 rmmod nvme_keyring 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3763914 ']' 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3763914 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3763914 ']' 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3763914 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3763914 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3763914' 00:31:22.868 killing process with pid 3763914 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3763914 00:31:22.868 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3763914 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.253 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:26.803 00:31:26.803 real 0m19.976s 00:31:26.803 user 0m48.240s 00:31:26.803 sys 0m6.890s 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:26.803 ************************************ 00:31:26.803 END TEST nvmf_shutdown_tc1 00:31:26.803 ************************************ 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.803 ************************************ 00:31:26.803 START TEST nvmf_shutdown_tc2 00:31:26.803 ************************************ 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:31:26.803 
20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:26.803 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 
-- # local -ga mlx 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:26.804 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:26.804 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:26.804 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.804 20:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:26.804 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.804 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.805 20:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:31:26.805 00:31:26.805 --- 10.0.0.2 ping statistics --- 00:31:26.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.805 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:31:26.805 00:31:26.805 --- 10.0.0.1 ping statistics --- 00:31:26.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.805 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3766461 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3766461 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3766461 ']' 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.805 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.067 [2024-07-22 20:38:38.856091] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:27.067 [2024-07-22 20:38:38.856225] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.067 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.067 [2024-07-22 20:38:39.000021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:27.328 [2024-07-22 20:38:39.148327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.328 [2024-07-22 20:38:39.148365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.328 [2024-07-22 20:38:39.148375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.328 [2024-07-22 20:38:39.148382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.328 [2024-07-22 20:38:39.148390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
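For reference, the network plumbing that nvmf_tcp_init traces in the records above reduces to the shell steps below. This is a condensed restatement of the commands already recorded in this log, not the full common.sh logic; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are what this particular host detected, not fixed values of the harness.

  # target-side NIC moves into its own namespace; initiator NIC stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # sanity checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside that namespace, as logged above with -i 0 -e 0xFFFF -m 0x1E, so the target listens on 10.0.0.2:4420 while bdevperf connects from the root namespace over cvl_0_1.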
00:31:27.328 [2024-07-22 20:38:39.148515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.328 [2024-07-22 20:38:39.148667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.328 [2024-07-22 20:38:39.148754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.328 [2024-07-22 20:38:39.148781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:27.590 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.590 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:27.590 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.590 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:27.590 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.851 [2024-07-22 20:38:39.628822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.851 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.852 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.852 Malloc1 00:31:27.852 [2024-07-22 20:38:39.757072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.852 Malloc2 00:31:27.852 Malloc3 00:31:28.113 Malloc4 00:31:28.113 Malloc5 00:31:28.113 Malloc6 00:31:28.113 Malloc7 00:31:28.374 Malloc8 00:31:28.374 Malloc9 00:31:28.374 Malloc10 00:31:28.374 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.374 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:28.374 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:28.374 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3766828 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3766828 /var/tmp/bdevperf.sock 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3766828 ']' 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:28.636 20:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:28.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.636 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 
"name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.637 { 00:31:28.637 "params": { 00:31:28.637 "name": "Nvme$subsystem", 00:31:28.637 "trtype": "$TEST_TRANSPORT", 00:31:28.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.637 "adrfam": "ipv4", 00:31:28.637 "trsvcid": "$NVMF_PORT", 00:31:28.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.637 "hdgst": ${hdgst:-false}, 00:31:28.637 "ddgst": ${ddgst:-false} 00:31:28.637 }, 00:31:28.637 "method": "bdev_nvme_attach_controller" 00:31:28.637 } 00:31:28.637 EOF 00:31:28.637 )") 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.637 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.638 { 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme$subsystem", 00:31:28.638 "trtype": "$TEST_TRANSPORT", 00:31:28.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "$NVMF_PORT", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.638 "hdgst": ${hdgst:-false}, 00:31:28.638 "ddgst": ${ddgst:-false} 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 } 00:31:28.638 EOF 00:31:28.638 )") 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.638 { 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme$subsystem", 00:31:28.638 "trtype": "$TEST_TRANSPORT", 00:31:28.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "$NVMF_PORT", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.638 "hdgst": ${hdgst:-false}, 00:31:28.638 "ddgst": ${ddgst:-false} 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 } 00:31:28.638 EOF 00:31:28.638 )") 00:31:28.638 20:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:31:28.638 [2024-07-22 20:38:40.480144] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:28.638 [2024-07-22 20:38:40.480256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766828 ] 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:31:28.638 20:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme1", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme2", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme3", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme4", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme5", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme6", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme7", 00:31:28.638 "trtype": "tcp", 00:31:28.638 
"traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme8", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme9", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.638 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:28.638 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:28.638 "hdgst": false, 00:31:28.638 "ddgst": false 00:31:28.638 }, 00:31:28.638 "method": "bdev_nvme_attach_controller" 00:31:28.638 },{ 00:31:28.638 "params": { 00:31:28.638 "name": "Nvme10", 00:31:28.638 "trtype": "tcp", 00:31:28.638 "traddr": "10.0.0.2", 00:31:28.638 "adrfam": "ipv4", 00:31:28.638 "trsvcid": "4420", 00:31:28.639 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:28.639 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:28.639 "hdgst": false, 00:31:28.639 "ddgst": false 00:31:28.639 }, 00:31:28.639 "method": "bdev_nvme_attach_controller" 00:31:28.639 }' 00:31:28.639 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.639 [2024-07-22 20:38:40.592478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.900 [2024-07-22 20:38:40.771391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.817 Running I/O for 10 seconds... 
00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=73 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:31:31.078 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.342 20:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=137 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 137 -ge 100 ']' 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3766828 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3766828 ']' 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3766828 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3766828 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3766828' 00:31:31.342 killing process with pid 3766828 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3766828 00:31:31.342 20:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3766828 00:31:31.609 Received shutdown signal, test time was about 1.057729 seconds 00:31:31.609 00:31:31.609 Latency(us) 00:31:31.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.609 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme1n1 : 1.02 193.27 12.08 0.00 0.00 325514.50 7427.41 263891.63 00:31:31.609 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme2n1 : 1.01 189.69 11.86 0.00 0.00 326347.09 24139.09 270882.13 00:31:31.609 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme3n1 : 1.04 249.43 15.59 0.00 0.00 242127.90 5488.64 274377.39 00:31:31.609 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme4n1 : 1.03 186.13 11.63 0.00 0.00 318773.19 24685.23 286610.77 00:31:31.609 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme5n1 : 1.05 243.22 15.20 0.00 0.00 239438.93 22282.24 276125.01 00:31:31.609 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme6n1 : 1.05 244.53 15.28 0.00 0.00 232925.87 20425.39 244667.73 00:31:31.609 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme7n1 : 1.06 242.24 15.14 0.00 0.00 230248.96 17148.59 291853.65 00:31:31.609 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme8n1 : 1.02 250.84 15.68 0.00 0.00 215989.97 13762.56 279620.27 00:31:31.609 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme9n1 : 1.04 184.60 11.54 0.00 0.00 287464.11 23920.64 305834.67 00:31:31.609 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:31.609 Verification LBA range: start 0x0 length 0x400 00:31:31.609 Nvme10n1 : 1.03 186.37 11.65 0.00 0.00 277148.44 26323.63 297096.53 00:31:31.609 =================================================================================================================== 00:31:31.609 Total : 2170.32 135.65 0.00 0.00 264371.73 5488.64 305834.67 00:31:32.181 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.571 rmmod nvme_tcp 00:31:33.571 rmmod nvme_fabrics 00:31:33.571 rmmod nvme_keyring 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.571 
20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3766461 ']' 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3766461 ']' 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3766461' 00:31:33.571 killing process with pid 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3766461 00:31:33.571 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3766461 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.958 20:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.885 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.885 00:31:36.885 real 0m10.509s 00:31:36.885 user 0m33.655s 00:31:36.885 sys 0m1.615s 00:31:36.885 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.885 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.885 ************************************ 
00:31:36.885 END TEST nvmf_shutdown_tc2 00:31:36.885 ************************************ 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:37.147 ************************************ 00:31:37.147 START TEST nvmf_shutdown_tc3 00:31:37.147 ************************************ 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 
== 0 )) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:37.147 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:37.147 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.147 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:37.148 Found net devices under 0000:4b:00.0: cvl_0_0 
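The trace above is the NIC discovery step: device-ID lists are built for Intel E810/X722 and Mellanox parts, the two 0x8086:0x159b functions at 0000:4b:00.0 and 0000:4b:00.1 (ice driver) are matched, and the renamed net device found under each function's sysfs node is recorded (cvl_0_0 here; the second port, cvl_0_1, is reported just below). A minimal stand-alone sketch of that sysfs walk follows; it is not the nvmf/common.sh helper itself, only an illustration assuming the same vendor:device pair:

    # Sketch only: list net devices backed by a given PCI vendor:device pair
    # by walking sysfs, producing the same information the trace prints as
    # "Found net devices under <bdf>: <iface>".
    want_vendor=0x8086
    want_device=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = "$want_vendor" ] || continue
        [ "$(cat "$pci/device")" = "$want_device" ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found $(basename "$net") under $(basename "$pci")"
        done
    done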
00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:37.148 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.148 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.148 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:31:37.409 00:31:37.409 --- 10.0.0.2 ping statistics --- 00:31:37.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.409 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:31:37.409 00:31:37.409 --- 10.0.0.1 ping statistics --- 00:31:37.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.409 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:37.409 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3768623 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3768623 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3768623 ']' 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
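The lines above wire up the test network for tc3: the first port (cvl_0_0) is moved into a fresh namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24; the second port (cvl_0_1) stays in the default namespace as 10.0.0.1/24; TCP port 4420 is opened with iptables; and one ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace with -m 0x1E. A rough stand-alone equivalent is sketched below, with a veth pair standing in for the two physical ice ports (that substitution is an assumption made purely for illustration; requires root):

    # Rough equivalent of the wiring in the trace, using a veth pair instead
    # of the two physical ports (illustration only; run as root).
    ip netns add tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                        # initiator side -> target namespace
    ip netns exec tgt_ns ping -c 1 10.0.0.1   # target namespace -> initiator side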
00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.410 20:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:37.670 [2024-07-22 20:38:49.446784] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:37.671 [2024-07-22 20:38:49.446912] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.671 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.671 [2024-07-22 20:38:49.595508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.931 [2024-07-22 20:38:49.743209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.932 [2024-07-22 20:38:49.743245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.932 [2024-07-22 20:38:49.743255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.932 [2024-07-22 20:38:49.743263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.932 [2024-07-22 20:38:49.743270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.932 [2024-07-22 20:38:49.743419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.932 [2024-07-22 20:38:49.743708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.932 [2024-07-22 20:38:49.743798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.932 [2024-07-22 20:38:49.743825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:38.193 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.193 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:38.193 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:38.193 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.193 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:38.453 [2024-07-22 20:38:50.220949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.453 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:31:38.453 Malloc1 00:31:38.453 [2024-07-22 20:38:50.349155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.453 Malloc2 00:31:38.453 Malloc3 00:31:38.713 Malloc4 00:31:38.713 Malloc5 00:31:38.713 Malloc6 00:31:38.713 Malloc7 00:31:38.974 Malloc8 00:31:38.974 Malloc9 00:31:38.974 Malloc10 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3769004 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3769004 /var/tmp/bdevperf.sock 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3769004 ']' 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
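What follows is the bdevperf side of tc3: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem 1..10 (the here-document template is traced below, then the expanded JSON), and bdevperf reads that config through process substitution as --json /dev/fd/63, with queue depth 64, 64 KiB I/Os, a verify workload and a 10 second run. Written out readably, the invocation driven by shutdown.sh@124 is:

    # The bdevperf invocation from the trace, unrolled for readability;
    # /dev/fd/63 in the trace is the process substitution shown here.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10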
00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:38.974 { 00:31:38.974 "params": { 00:31:38.974 "name": "Nvme$subsystem", 00:31:38.974 "trtype": "$TEST_TRANSPORT", 00:31:38.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.974 "adrfam": "ipv4", 00:31:38.974 "trsvcid": "$NVMF_PORT", 00:31:38.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.974 "hdgst": ${hdgst:-false}, 00:31:38.974 "ddgst": ${ddgst:-false} 00:31:38.974 }, 00:31:38.974 "method": "bdev_nvme_attach_controller" 00:31:38.974 } 00:31:38.974 EOF 00:31:38.974 )") 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:38.974 { 00:31:38.974 "params": { 00:31:38.974 "name": "Nvme$subsystem", 00:31:38.974 "trtype": "$TEST_TRANSPORT", 00:31:38.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.974 "adrfam": "ipv4", 00:31:38.974 "trsvcid": "$NVMF_PORT", 00:31:38.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.974 "hdgst": ${hdgst:-false}, 00:31:38.974 "ddgst": ${ddgst:-false} 00:31:38.974 }, 00:31:38.974 "method": "bdev_nvme_attach_controller" 00:31:38.974 } 00:31:38.974 EOF 00:31:38.974 )") 00:31:38.974 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.235 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.235 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.235 { 00:31:39.235 "params": { 00:31:39.235 "name": "Nvme$subsystem", 00:31:39.235 "trtype": "$TEST_TRANSPORT", 00:31:39.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.235 "adrfam": "ipv4", 00:31:39.235 "trsvcid": "$NVMF_PORT", 00:31:39.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.235 "hdgst": ${hdgst:-false}, 00:31:39.235 "ddgst": ${ddgst:-false} 00:31:39.235 }, 00:31:39.235 "method": 
"bdev_nvme_attach_controller" 00:31:39.235 } 00:31:39.235 EOF 00:31:39.235 )") 00:31:39.235 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.235 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.236 { 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme$subsystem", 00:31:39.236 "trtype": "$TEST_TRANSPORT", 00:31:39.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "$NVMF_PORT", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.236 "hdgst": ${hdgst:-false}, 00:31:39.236 "ddgst": ${ddgst:-false} 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 } 00:31:39.236 EOF 00:31:39.236 )") 00:31:39.236 [2024-07-22 20:38:51.055935] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:39.236 [2024-07-22 20:38:51.056043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769004 ] 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:31:39.236 20:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme1", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "4420", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:39.236 "hdgst": false, 00:31:39.236 "ddgst": false 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 },{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme2", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "4420", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:39.236 "hdgst": false, 00:31:39.236 "ddgst": false 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 },{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme3", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "4420", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:39.236 "hdgst": false, 00:31:39.236 "ddgst": false 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 },{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme4", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "4420", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:39.236 "hdgst": false, 00:31:39.236 "ddgst": false 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 },{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme5", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.236 "trsvcid": "4420", 00:31:39.236 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:39.236 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:39.236 "hdgst": false, 00:31:39.236 "ddgst": false 00:31:39.236 }, 00:31:39.236 "method": "bdev_nvme_attach_controller" 00:31:39.236 },{ 00:31:39.236 "params": { 00:31:39.236 "name": "Nvme6", 00:31:39.236 "trtype": "tcp", 00:31:39.236 "traddr": "10.0.0.2", 00:31:39.236 "adrfam": "ipv4", 00:31:39.237 "trsvcid": "4420", 00:31:39.237 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:39.237 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:39.237 "hdgst": false, 00:31:39.237 "ddgst": false 00:31:39.237 }, 00:31:39.237 "method": "bdev_nvme_attach_controller" 00:31:39.237 },{ 00:31:39.237 "params": { 00:31:39.237 "name": "Nvme7", 00:31:39.237 "trtype": "tcp", 00:31:39.237 "traddr": "10.0.0.2", 00:31:39.237 "adrfam": "ipv4", 00:31:39.237 "trsvcid": "4420", 00:31:39.237 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:39.237 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:39.237 "hdgst": false, 00:31:39.237 "ddgst": false 00:31:39.237 }, 00:31:39.237 "method": "bdev_nvme_attach_controller" 00:31:39.237 },{ 00:31:39.237 "params": { 00:31:39.237 "name": "Nvme8", 00:31:39.237 "trtype": "tcp", 00:31:39.237 "traddr": "10.0.0.2", 00:31:39.237 "adrfam": "ipv4", 00:31:39.237 "trsvcid": "4420", 00:31:39.237 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:39.237 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:39.237 "hdgst": false, 00:31:39.237 "ddgst": false 00:31:39.237 }, 00:31:39.237 "method": "bdev_nvme_attach_controller" 00:31:39.237 },{ 00:31:39.237 "params": { 00:31:39.237 "name": "Nvme9", 00:31:39.237 "trtype": "tcp", 00:31:39.237 "traddr": "10.0.0.2", 00:31:39.237 "adrfam": "ipv4", 00:31:39.237 "trsvcid": "4420", 00:31:39.237 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:39.237 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:39.237 "hdgst": false, 00:31:39.237 "ddgst": false 00:31:39.237 }, 00:31:39.237 "method": "bdev_nvme_attach_controller" 00:31:39.237 },{ 00:31:39.237 "params": { 00:31:39.237 "name": "Nvme10", 00:31:39.237 "trtype": "tcp", 00:31:39.237 "traddr": "10.0.0.2", 00:31:39.237 "adrfam": "ipv4", 00:31:39.237 "trsvcid": "4420", 00:31:39.237 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:39.237 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:39.237 "hdgst": false, 00:31:39.237 "ddgst": false 00:31:39.237 }, 00:31:39.237 "method": "bdev_nvme_attach_controller" 00:31:39.237 }' 00:31:39.237 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.237 [2024-07-22 20:38:51.167243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.497 [2024-07-22 20:38:51.345190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.433 Running I/O for 10 seconds... 
00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:41.694 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3768623 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3768623 ']' 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3768623 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3768623 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3768623' 00:31:41.968 killing process with pid 3768623 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3768623 00:31:41.968 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3768623 00:31:41.968 [2024-07-22 20:38:53.919391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919468] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919604] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919739] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.968 [2024-07-22 20:38:53.919751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.919833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922699] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922843] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922973] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.922997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.969 [2024-07-22 20:38:53.923035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.923041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.923047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.923053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.923060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.923067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.925920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.925949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.926528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927575] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927713] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927847] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.927961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930852] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.970 [2024-07-22 20:38:53.930878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.930988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931001] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931137] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.931253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932090] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.971 [2024-07-22 20:38:53.932117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932226] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932361] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:31:41.972 [2024-07-22 20:38:53.932893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 
20:38:53.932939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.932969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.932982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.932996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.972 [2024-07-22 20:38:53.933179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.972 [2024-07-22 20:38:53.933190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 
20:38:53.933220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 
20:38:53.933459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 
20:38:53.933689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.973 [2024-07-22 20:38:53.933865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.973 [2024-07-22 20:38:53.933875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.933887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.933898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.933911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 
20:38:53.933921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.933934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.933945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.933957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.933967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.933980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.933990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 
20:38:53.934152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934385] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.974 [2024-07-22 20:38:53.934454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.934497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.974 [2024-07-22 20:38:53.934710] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000391f00 was disconnected and freed. reset controller. 00:31:41.974 [2024-07-22 20:38:53.935282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038cc80 is same with the state(5) to be set 00:31:41.974 [2024-07-22 20:38:53.935442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038d680 is same with the state(5) to be set 00:31:41.974 [2024-07-22 20:38:53.935570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.974 [2024-07-22 20:38:53.935626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.974 [2024-07-22 20:38:53.935637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038b880 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.935688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 
20:38:53.935745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.935809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038ae80 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.935929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.935985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.935996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.936044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.936164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.936289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 
[2024-07-22 20:38:53.936301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.936410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:41.975 [2024-07-22 20:38:53.936490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389a80 is same with the state(5) to be set 00:31:41.975 [2024-07-22 20:38:53.936653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.975 [2024-07-22 20:38:53.936827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.975 [2024-07-22 20:38:53.936837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.936993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.937549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.937560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.976 [2024-07-22 20:38:53.945460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.976 [2024-07-22 20:38:53.945470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.945905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.945919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390b00 is same with the state(5) to be set 00:31:41.977 [2024-07-22 20:38:53.946140] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000390b00 was disconnected and freed. reset controller. 00:31:41.977 [2024-07-22 20:38:53.969369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.977 [2024-07-22 20:38:53.969549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.977 [2024-07-22 20:38:53.969559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.969951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.969995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.978 [2024-07-22 20:38:53.970122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.978 [2024-07-22 20:38:53.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.970951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.979 [2024-07-22 20:38:53.970960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.979 [2024-07-22 20:38:53.974562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000392e00 was disconnected and freed. reset controller. 
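Editor's note on the span above: while the test tears down the TCP queue pair, the SPDK NVMe driver prints every outstanding READ/WRITE it is completing with the status "ABORTED - SQ DELETION (00/08)", and bdev_nvme then reports the qpair disconnected and freed before resetting the controller. To make a span like this easier to audit offline, here is a minimal sketch, assuming the console output has been saved locally as nvmf_shutdown.log (a hypothetical file name); the regexes only target the nvme_qpair.c print format visible in this log.

```python
#!/usr/bin/env python3
"""Tally the aborted-command notices from a saved copy of this console log."""
import re
from collections import Counter

# Formats as printed by nvme_io_qpair_print_command / spdk_nvme_print_completion above.
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL_RE = re.compile(r"ABORTED - SQ DELETION \((\w+)/(\w+)\)")

def summarize(path="nvmf_shutdown.log"):
    opcodes, aborted = Counter(), 0
    with open(path) as fh:
        for line in fh:
            for op, *_ in CMD_RE.findall(line):
                opcodes[op] += 1
            aborted += len(CPL_RE.findall(line))
    return opcodes, aborted

if __name__ == "__main__":
    ops, aborted = summarize()
    print(f"aborted completions: {aborted}, commands printed: {dict(ops)}")
```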
00:31:41.979 [2024-07-22 20:38:53.974636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:41.979 [2024-07-22 20:38:53.974672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038cc80 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038b880 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038ae80 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:31:41.979 [2024-07-22 20:38:53.974855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:41.980 [2024-07-22 20:38:53.974876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor 00:31:41.980 [2024-07-22 20:38:53.974893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389a80 (9): Bad file descriptor 00:31:42.272 [2024-07-22 20:38:53.976311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.272 [2024-07-22 20:38:53.976615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.272 [2024-07-22 20:38:53.976626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:42.273 [2024-07-22 20:38:53.976711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 
[2024-07-22 20:38:53.976956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.976981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.976991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 
20:38:53.977195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 
20:38:53.977457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.273 [2024-07-22 20:38:53.977584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.273 [2024-07-22 20:38:53.977597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 
20:38:53.977690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.977886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.977898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000391000 is same with the state(5) to be set 00:31:42.274 [2024-07-22 20:38:53.978100] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000391000 was disconnected and freed. reset controller. 
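Editor's note: the "(00/08)" pair attached to every completion above is the NVMe status the driver is decoding for us: status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion", matching the "ABORTED - SQ DELETION" text. A tiny decoder sketch follows; the table is deliberately partial and only covers codes that appear in this log.

```python
# Decode the "(SCT/SC)" pair printed with each completion above.
# Partial table: SCT 0x0 is the generic command status set; only the
# entries relevant to this log are listed.
GENERIC_SC = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:
        return GENERIC_SC.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

assert decode_status(0x00, 0x08) == "ABORTED - SQ DELETION"
```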
00:31:42.274 [2024-07-22 20:38:53.981174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:42.274 [2024-07-22 20:38:53.982781] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000391500 was disconnected and freed. reset controller. 00:31:42.274 [2024-07-22 20:38:53.983841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.274 [2024-07-22 20:38:53.983871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038c280 with addr=10.0.0.2, port=4420 00:31:42.274 [2024-07-22 20:38:53.983885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:42.274 [2024-07-22 20:38:53.984420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.274 [2024-07-22 20:38:53.984467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389a80 with addr=10.0.0.2, port=4420 00:31:42.274 [2024-07-22 20:38:53.984482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389a80 is same with the state(5) to be set 00:31:42.274 [2024-07-22 20:38:53.985527] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:42.274 [2024-07-22 20:38:53.985616] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:42.274 [2024-07-22 20:38:53.985663] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:42.274 [2024-07-22 20:38:53.986099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:42.274 [2024-07-22 20:38:53.986125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:42.274 [2024-07-22 20:38:53.986170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:42.274 [2024-07-22 20:38:53.986189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389a80 (9): Bad file descriptor 00:31:42.274 [2024-07-22 20:38:53.986237] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
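Editor's note: the reset path now tries to reconnect each queue pair, and posix_sock_create reports connect() failing with errno 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting on 10.0.0.2:4420 at that moment; bdev_nvme also declines the extra failover because one is already in progress. A small, hedged illustration (not SPDK code) of looking the errno up and probing the listener:

```python
import errno, os, socket

# On Linux the errno printed above, 111, is ECONNREFUSED ("Connection refused").
# errno numbering is platform-specific, so resolve it via the errno module.
print(errno.ECONNREFUSED,
      errno.errorcode[errno.ECONNREFUSED],
      os.strerror(errno.ECONNREFUSED))

def listener_up(addr="10.0.0.2", port=4420, timeout=0.5):
    """Hypothetical probe, not SPDK code: is anything accepting on addr:port?"""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:  # ECONNREFUSED and friends while the target is resetting
        return False
```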
00:31:42.274 [2024-07-22 20:38:53.986320] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:42.274 [2024-07-22 20:38:53.986369] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:42.274 [2024-07-22 20:38:53.986914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:31:42.274 [2024-07-22 20:38:53.987472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.274 [2024-07-22 20:38:53.987519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038e080 with addr=10.0.0.2, port=4420 00:31:42.274 [2024-07-22 20:38:53.987536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set 00:31:42.274 [2024-07-22 20:38:53.987972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.274 [2024-07-22 20:38:53.987990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038a480 with addr=10.0.0.2, port=4420 00:31:42.274 [2024-07-22 20:38:53.988002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set 00:31:42.274 [2024-07-22 20:38:53.988013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:42.274 [2024-07-22 20:38:53.988024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:42.274 [2024-07-22 20:38:53.988047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:42.274 [2024-07-22 20:38:53.988073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:42.274 [2024-07-22 20:38:53.988084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:42.274 [2024-07-22 20:38:53.988094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
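Editor's note: once the reconnect attempts are refused, nvme_ctrlr_process_init reports cnode7 and cnode3 in an error state, spdk_nvme_ctrlr_reconnect_poll_async logs "controller reinitialization failed", and nvme_ctrlr_fail leaves the controllers in the failed state. The toy sketch below only mirrors the sequence visible in these messages; it is an assumption-laden illustration, not SPDK's actual reset state machine.

```python
from enum import Enum, auto

class CtrlrState(Enum):
    CONNECTED = auto()
    RESETTING = auto()
    FAILED = auto()

def reset_controller(connect, max_attempts=1):
    """Toy model of the log sequence: disconnect, attempt reconnect,
    and drop to FAILED when the connection keeps being refused."""
    state = CtrlrState.RESETTING            # "resetting controller"
    for _ in range(max_attempts):
        if connect():                       # stand-in for the TCP qpair connect
            return CtrlrState.CONNECTED
    return CtrlrState.FAILED                # "controller reinitialization failed"

print(reset_controller(lambda: False))      # CtrlrState.FAILED, as for cnode3/cnode7
```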
00:31:42.274 [2024-07-22 20:38:53.988152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 
20:38:53.988432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.274 [2024-07-22 20:38:53.988458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.274 [2024-07-22 20:38:53.988469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.988982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.988993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.275 [2024-07-22 20:38:53.989410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.275 [2024-07-22 20:38:53.989423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.989715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.989726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390100 is same with the state(5) to be set 00:31:42.276 [2024-07-22 20:38:53.991258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.276 [2024-07-22 20:38:53.991887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.276 [2024-07-22 20:38:53.991897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.991911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.991922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.991945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.991957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.991968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.991980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.991991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.277 [2024-07-22 20:38:53.992546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.277 [2024-07-22 20:38:53.992557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.992768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.992785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000390600 is same with the state(5) to be set 00:31:42.278 [2024-07-22 20:38:53.994322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.994984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.994995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.995007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.278 [2024-07-22 20:38:53.995017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.278 [2024-07-22 20:38:53.995030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:42.279 [2024-07-22 20:38:53.995325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 
20:38:53.995560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995792] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.995843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.995854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000391a00 is same with the state(5) to be set 00:31:42.279 [2024-07-22 20:38:53.997355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.997374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.997401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.997414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.997438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.279 [2024-07-22 20:38:53.997449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.279 [2024-07-22 20:38:53.997462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.997985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.997996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.280 [2024-07-22 20:38:53.998397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.280 [2024-07-22 20:38:53.998409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:53.998865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:53.998875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000392400 is same with the state(5) to be set 00:31:42.281 [2024-07-22 20:38:54.000364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.281 [2024-07-22 20:38:54.000541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.281 [2024-07-22 20:38:54.000553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.000986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.000996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.283 [2024-07-22 20:38:54.001246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.283 [2024-07-22 20:38:54.001257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.284 [2024-07-22 20:38:54.001872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.284 [2024-07-22 20:38:54.001883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000392900 is same with the state(5) to be set 00:31:42.284 [2024-07-22 20:38:54.005588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.284 [2024-07-22 20:38:54.005612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.284 [2024-07-22 20:38:54.005623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:42.284 [2024-07-22 20:38:54.005638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:42.284 [2024-07-22 20:38:54.005654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:31:42.284 [2024-07-22 20:38:54.006122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.284 [2024-07-22 20:38:54.006142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038ae80 with addr=10.0.0.2, port=4420 00:31:42.284 [2024-07-22 20:38:54.006155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038ae80 is same with the state(5) to be set 00:31:42.284 [2024-07-22 20:38:54.006170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor 00:31:42.284 [2024-07-22 20:38:54.006185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:42.284 [2024-07-22 20:38:54.006236] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.284 [2024-07-22 20:38:54.006254] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.284 [2024-07-22 20:38:54.006273] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.284 [2024-07-22 20:38:54.006288] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:42.284 [2024-07-22 20:38:54.006301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038ae80 (9): Bad file descriptor
00:31:42.284 [2024-07-22 20:38:54.006396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:31:42.284 task offset: 17536 on job bdev=Nvme7n1 fails
00:31:42.284
00:31:42.284 Latency(us)
00:31:42.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:42.284 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.284 Job: Nvme1n1 ended in about 0.90 seconds with error
00:31:42.284 Verification LBA range: start 0x0 length 0x400
00:31:42.284 Nvme1n1 : 0.90 142.94 8.93 71.47 0.00 294870.47 29709.65 263891.63
00:31:42.284 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.284 Job: Nvme2n1 ended in about 0.90 seconds with error
00:31:42.284 Verification LBA range: start 0x0 length 0x400
00:31:42.284 Nvme2n1 : 0.90 142.46 8.90 71.23 0.00 289071.22 24357.55 255153.49
00:31:42.284 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.284 Job: Nvme3n1 ended in about 0.88 seconds with error
00:31:42.284 Verification LBA range: start 0x0 length 0x400
00:31:42.284 Nvme3n1 : 0.88 218.00 13.62 72.67 0.00 207232.64 19551.57 265639.25
00:31:42.284 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.284 Job: Nvme4n1 ended in about 0.89 seconds with error
00:31:42.284 Verification LBA range: start 0x0 length 0x400
00:31:42.284 Nvme4n1 : 0.89 144.30 9.02 72.15 0.00 271871.00 13871.79 312825.17
00:31:42.284 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.284 Verification LBA range: start 0x0 length 0x400
00:31:42.284 Nvme5n1 : 0.88 217.14 13.57 0.00 0.00 264074.81 17367.04 276125.01
00:31:42.285 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.285 Job: Nvme6n1 ended in about 0.90 seconds with error
00:31:42.285 Verification LBA range: start 0x0 length 0x400
00:31:42.285 Nvme6n1 : 0.90 141.98 8.87 70.99 0.00 263265.28 17476.27 272629.76
00:31:42.285 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.285 Job: Nvme7n1 ended in about 0.87 seconds with error
00:31:42.285 Verification LBA range: start 0x0 length 0x400
00:31:42.285 Nvme7n1 : 0.87 146.51 9.16 73.25 0.00 247269.55 24139.09 276125.01
00:31:42.285 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.285 Job: Nvme8n1 ended in about 0.90 seconds with error
00:31:42.285 Verification LBA range: start 0x0 length 0x400
00:31:42.285 Nvme8n1 : 0.90 141.50 8.84 70.75 0.00 250825.67 26214.40 249910.61
00:31:42.285 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.285 Job: Nvme9n1 ended in about 0.91 seconds with error
00:31:42.285 Verification LBA range: start 0x0 length 0x400
00:31:42.285 Nvme9n1 : 0.91 141.04 8.81 70.52 0.00 245249.71 20862.29 272629.76
00:31:42.285 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:42.285 Job: Nvme10n1 ended in about 0.89 seconds with error
00:31:42.285 Verification LBA range: start 0x0 length 0x400
00:31:42.285 Nvme10n1 : 0.89 144.53 9.03 72.27 0.00 231232.85 23811.41 304087.04
00:31:42.285 ===================================================================================================================
00:31:42.285 Total : 1580.39 98.77 645.30 0.00 254907.17 13871.79 312825.17
00:31:42.285 [2024-07-22 20:38:54.074262] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:42.285 [2024-07-22 20:38:54.074313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:31:42.285 [2024-07-22 20:38:54.074764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.074788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.074802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.075171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.075187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389080 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.075198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.075591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.075606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038b880 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.075616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038b880 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.075628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.075639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.075651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:42.285 [2024-07-22 20:38:54.075671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.075680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.075690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:42.285 [2024-07-22 20:38:54.077868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:42.285 [2024-07-22 20:38:54.077900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:42.285 [2024-07-22 20:38:54.077913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.285 [2024-07-22 20:38:54.077925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:42.285 [2024-07-22 20:38:54.078289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.078310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038cc80 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.078321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038cc80 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.078704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.078719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038d680 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.078729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038d680 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.078746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.078763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389080 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.078776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038b880 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.078787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.078796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.078806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:42.285 [2024-07-22 20:38:54.078863] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.285 [2024-07-22 20:38:54.078879] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.285 [2024-07-22 20:38:54.078895] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.285 [2024-07-22 20:38:54.078909] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:42.285 [2024-07-22 20:38:54.079030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:42.285 [2024-07-22 20:38:54.079412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.079430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389a80 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.079441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389a80 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.079647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.079662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038c280 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.079672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038c280 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.079685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038cc80 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.079698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.079710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.079719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.079729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:42.285 [2024-07-22 20:38:54.079743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.079752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.079762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:42.285 [2024-07-22 20:38:54.079777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.079788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.079798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:42.285 [2024-07-22 20:38:54.079895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:42.285 [2024-07-22 20:38:54.079911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:42.285 [2024-07-22 20:38:54.079924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.285 [2024-07-22 20:38:54.079933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.285 [2024-07-22 20:38:54.079942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:42.285 [2024-07-22 20:38:54.079967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000389a80 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.079980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038c280 (9): Bad file descriptor 00:31:42.285 [2024-07-22 20:38:54.079992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.080001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.080010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:42.285 [2024-07-22 20:38:54.080023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:42.285 [2024-07-22 20:38:54.080031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:42.285 [2024-07-22 20:38:54.080042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:42.285 [2024-07-22 20:38:54.080081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.285 [2024-07-22 20:38:54.080092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.285 [2024-07-22 20:38:54.080446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.080463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038a480 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.080473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038a480 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.080858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.285 [2024-07-22 20:38:54.080873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038e080 with addr=10.0.0.2, port=4420 00:31:42.285 [2024-07-22 20:38:54.080883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500038e080 is same with the state(5) to be set 00:31:42.285 [2024-07-22 20:38:54.080893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:42.286 [2024-07-22 20:38:54.080902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:42.286 [2024-07-22 20:38:54.080912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:42.286 [2024-07-22 20:38:54.080927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:42.286 [2024-07-22 20:38:54.080936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:42.286 [2024-07-22 20:38:54.080945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:42.286 [2024-07-22 20:38:54.080984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:42.286 [2024-07-22 20:38:54.080998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.286 [2024-07-22 20:38:54.081011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038a480 (9): Bad file descriptor 00:31:42.286 [2024-07-22 20:38:54.081025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038e080 (9): Bad file descriptor 00:31:42.286 [2024-07-22 20:38:54.081061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:42.286 [2024-07-22 20:38:54.081071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:42.286 [2024-07-22 20:38:54.081081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:42.286 [2024-07-22 20:38:54.081094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:42.286 [2024-07-22 20:38:54.081104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:42.286 [2024-07-22 20:38:54.081113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:42.286 [2024-07-22 20:38:54.081151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:42.286 [2024-07-22 20:38:54.081161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:43.670 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:31:43.670 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3769004 00:31:44.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3769004) - No such process 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:31:44.614 20:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.614 rmmod nvme_tcp 00:31:44.614 rmmod nvme_fabrics 00:31:44.614 rmmod nvme_keyring 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.614 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:47.163 00:31:47.163 real 0m9.623s 00:31:47.163 user 0m25.923s 00:31:47.163 sys 0m1.485s 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:47.163 ************************************ 00:31:47.163 END TEST nvmf_shutdown_tc3 00:31:47.163 ************************************ 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:31:47.163 00:31:47.163 real 0m40.487s 00:31:47.163 user 1m47.966s 00:31:47.163 sys 0m10.243s 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:47.163 ************************************ 00:31:47.163 END TEST nvmf_shutdown 00:31:47.163 ************************************ 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:31:47.163 00:31:47.163 real 17m38.177s 00:31:47.163 user 47m7.828s 00:31:47.163 sys 3m55.486s 00:31:47.163 20:38:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.163 20:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:47.163 ************************************ 00:31:47.163 END TEST nvmf_target_extra 00:31:47.163 ************************************ 00:31:47.163 20:38:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:47.163 20:38:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:47.163 20:38:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:47.163 20:38:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.163 20:38:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.163 ************************************ 00:31:47.163 START TEST nvmf_host 00:31:47.163 ************************************ 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:47.163 * Looking for test storage... 00:31:47.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.163 ************************************ 00:31:47.163 START TEST nvmf_multicontroller 00:31:47.163 ************************************ 00:31:47.163 20:38:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:47.163 * Looking for test storage... 00:31:47.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.163 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.164 20:38:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:31:47.164 20:38:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@296 -- # local -ga e810 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.313 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:55.314 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:55.314 20:39:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:55.314 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:55.314 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:55.314 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:31:55.314 20:39:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.314 20:39:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:55.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:31:55.314 00:31:55.314 --- 10.0.0.2 ping statistics --- 00:31:55.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.314 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:31:55.314 00:31:55.314 --- 10.0.0.1 ping statistics --- 00:31:55.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.314 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3774487 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3774487 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3774487 ']' 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.314 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.315 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.315 20:39:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 [2024-07-22 20:39:06.449503] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
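With both pings succeeding, nvmftestinit is complete: the two E810 ports were detected, cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stayed in the root namespace with 10.0.0.1/24, TCP port 4420 was opened in iptables, and nvmfappstart has just launched nvmf_tgt inside the namespace on core mask 0xE. A condensed sketch of that setup, using the same device names, addresses and flags that appear in this log (the authoritative logic lives in test/nvmf/common.sh; error handling and the address flushes are omitted):

  # Hedged sketch of the test-bed setup the log above just performed.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # Start the target inside the namespace on cores 1-3 (mask 0xE), as nvmfappstart does:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &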
00:31:55.315 [2024-07-22 20:39:06.449625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.315 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.315 [2024-07-22 20:39:06.603458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.315 [2024-07-22 20:39:06.829741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.315 [2024-07-22 20:39:06.829815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.315 [2024-07-22 20:39:06.829830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.315 [2024-07-22 20:39:06.829841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.315 [2024-07-22 20:39:06.829854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.315 [2024-07-22 20:39:06.830019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.315 [2024-07-22 20:39:06.830144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.315 [2024-07-22 20:39:06.830175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 [2024-07-22 20:39:07.240847] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 Malloc0 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.315 
20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.315 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 [2024-07-22 20:39:07.349243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 [2024-07-22 20:39:07.361151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 Malloc1 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3774565 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3774565 /var/tmp/bdevperf.sock 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3774565 ']' 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.576 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:55.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
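At this point the target holds two subsystems, nqn.2016-06.io.spdk:cnode1 and cnode2, each backed by a 64 MB / 512-byte-block Malloc bdev and listening on 10.0.0.2 ports 4420 and 4421, and bdevperf has been started with -z (wait for RPC) on /var/tmp/bdevperf.sock so the multicontroller cases can attach and detach controllers through it. A hedged sketch of the same target-side provisioning issued directly with rpc.py, which is what the rpc_cmd wrapper in this log forwards to; the scripts/rpc.py path and the target's default RPC socket are assumptions:

  # Hedged sketch: provision the target as multicontroller.sh does (cnode1 shown; cnode2 is analogous).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421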
00:31:55.577 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.577 20:39:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.520 NVMe0n1 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.520 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.783 1 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.783 request: 00:31:56.783 { 00:31:56.783 "name": "NVMe0", 00:31:56.783 "trtype": "tcp", 00:31:56.783 "traddr": "10.0.0.2", 00:31:56.783 "adrfam": "ipv4", 00:31:56.783 
"trsvcid": "4420", 00:31:56.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.783 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:56.783 "hostaddr": "10.0.0.2", 00:31:56.783 "hostsvcid": "60000", 00:31:56.783 "prchk_reftag": false, 00:31:56.783 "prchk_guard": false, 00:31:56.783 "hdgst": false, 00:31:56.783 "ddgst": false, 00:31:56.783 "method": "bdev_nvme_attach_controller", 00:31:56.783 "req_id": 1 00:31:56.783 } 00:31:56.783 Got JSON-RPC error response 00:31:56.783 response: 00:31:56.783 { 00:31:56.783 "code": -114, 00:31:56.783 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:56.783 } 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.783 request: 00:31:56.783 { 00:31:56.783 "name": "NVMe0", 00:31:56.783 "trtype": "tcp", 00:31:56.783 "traddr": "10.0.0.2", 00:31:56.783 "adrfam": "ipv4", 00:31:56.783 "trsvcid": "4420", 00:31:56.783 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:56.783 "hostaddr": "10.0.0.2", 00:31:56.783 "hostsvcid": "60000", 00:31:56.783 "prchk_reftag": false, 00:31:56.783 "prchk_guard": false, 00:31:56.783 "hdgst": false, 00:31:56.783 "ddgst": false, 00:31:56.783 "method": "bdev_nvme_attach_controller", 00:31:56.783 "req_id": 1 00:31:56.783 } 00:31:56.783 Got JSON-RPC error response 00:31:56.783 response: 00:31:56.783 { 00:31:56.783 "code": -114, 00:31:56.783 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:31:56.783 } 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:56.783 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.784 request: 00:31:56.784 { 00:31:56.784 "name": "NVMe0", 00:31:56.784 "trtype": "tcp", 00:31:56.784 "traddr": "10.0.0.2", 00:31:56.784 "adrfam": "ipv4", 00:31:56.784 "trsvcid": "4420", 00:31:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.784 "hostaddr": "10.0.0.2", 00:31:56.784 "hostsvcid": "60000", 00:31:56.784 "prchk_reftag": false, 00:31:56.784 "prchk_guard": false, 00:31:56.784 "hdgst": false, 00:31:56.784 "ddgst": false, 00:31:56.784 "multipath": "disable", 00:31:56.784 "method": "bdev_nvme_attach_controller", 00:31:56.784 "req_id": 1 00:31:56.784 } 00:31:56.784 Got JSON-RPC error response 00:31:56.784 response: 00:31:56.784 { 00:31:56.784 "code": -114, 00:31:56.784 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:31:56.784 } 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.784 request: 00:31:56.784 { 00:31:56.784 "name": "NVMe0", 00:31:56.784 "trtype": "tcp", 00:31:56.784 "traddr": "10.0.0.2", 00:31:56.784 "adrfam": "ipv4", 00:31:56.784 "trsvcid": "4420", 00:31:56.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.784 "hostaddr": "10.0.0.2", 00:31:56.784 "hostsvcid": "60000", 00:31:56.784 "prchk_reftag": false, 00:31:56.784 "prchk_guard": false, 00:31:56.784 "hdgst": false, 00:31:56.784 "ddgst": false, 00:31:56.784 "multipath": "failover", 00:31:56.784 "method": "bdev_nvme_attach_controller", 00:31:56.784 "req_id": 1 00:31:56.784 } 00:31:56.784 Got JSON-RPC error response 00:31:56.784 response: 00:31:56.784 { 00:31:56.784 "code": -114, 00:31:56.784 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:56.784 } 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.784 00:31:56.784 20:39:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.784 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.048 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:57.048 20:39:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:57.990 0 00:31:57.990 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:57.990 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.990 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3774565 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3774565 ']' 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3774565 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774565 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
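For reference, the multipath exercise above (host/multicontroller.sh@79-@95) boils down to a handful of JSON-RPC calls against the bdevperf socket; the sketch below is a condensed illustration assuming the same socket path, addresses and NQNs recorded in this run — rpc_cmd in the trace wraps scripts/rpc.py — and is not the literal test-script code.

  # add a second path (port 4421) to the existing NVMe0 controller, then remove it again
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # attach a second controller under its own bdev name with an explicit host address and port
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # the test then expects exactly two controllers to be reported
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe

The earlier -114 responses are the expected negative cases wrapped in NOT: re-attaching NVMe0 on the same path, or with -x disable / -x failover, is rejected because a controller with that name already exists.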
00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774565' 00:31:58.251 killing process with pid 3774565 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3774565 00:31:58.251 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3774565 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:58.824 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:58.824 [2024-07-22 20:39:07.546972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:58.824 [2024-07-22 20:39:07.547086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774565 ] 00:31:58.824 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.824 [2024-07-22 20:39:07.657727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.824 [2024-07-22 20:39:07.835636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.824 [2024-07-22 20:39:08.868018] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 0995c3de-79de-490f-a4a2-cec0539919e7 already exists 00:31:58.824 [2024-07-22 20:39:08.868062] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:0995c3de-79de-490f-a4a2-cec0539919e7 alias for bdev NVMe1n1 00:31:58.824 [2024-07-22 20:39:08.868076] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:58.824 Running I/O for 1 seconds... 00:31:58.824 00:31:58.824 Latency(us) 00:31:58.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.824 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:58.824 NVMe0n1 : 1.01 24705.69 96.51 0.00 0.00 5168.19 4450.99 15073.28 00:31:58.824 =================================================================================================================== 00:31:58.824 Total : 24705.69 96.51 0.00 0.00 5168.19 4450.99 15073.28 00:31:58.824 Received shutdown signal, test time was about 1.000000 seconds 00:31:58.824 00:31:58.824 Latency(us) 00:31:58.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.824 =================================================================================================================== 00:31:58.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.824 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.824 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.824 rmmod nvme_tcp 00:31:59.085 rmmod nvme_fabrics 00:31:59.085 rmmod nvme_keyring 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3774487 ']' 00:31:59.085 20:39:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3774487 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3774487 ']' 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3774487 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774487 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774487' 00:31:59.085 killing process with pid 3774487 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3774487 00:31:59.085 20:39:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3774487 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.026 20:39:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:01.941 00:32:01.941 real 0m14.908s 00:32:01.941 user 0m19.963s 00:32:01.941 sys 0m6.439s 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:01.941 ************************************ 00:32:01.941 END TEST nvmf_multicontroller 00:32:01.941 ************************************ 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.941 20:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.941 ************************************ 00:32:01.941 START TEST nvmf_aer 00:32:01.941 ************************************ 00:32:01.941 20:39:13 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:02.202 * Looking for test storage... 00:32:02.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.202 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:32:02.203 20:39:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:08.797 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:08.797 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.797 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:08.798 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.798 20:39:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:08.798 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.798 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:09.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:32:09.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:32:09.059 00:32:09.059 --- 10.0.0.2 ping statistics --- 00:32:09.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.059 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:32:09.059 00:32:09.059 --- 10.0.0.1 ping statistics --- 00:32:09.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.059 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:32:09.059 20:39:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3780018 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3780018 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3780018 ']' 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.059 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:09.324 [2024-07-22 20:39:21.135536] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
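The ping checks just above come from the nvmf/common.sh network bring-up, which moves the first e810 port into a dedicated namespace before nvmf_tgt is started inside it. A condensed, hedged sketch of that plumbing, using the interface names and addresses from this log (the real helper does additional device-discovery bookkeeping first):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the target port
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

This mirrors the steps logged above as nvmf/common.sh@248-@268 and the @480 nvmfappstart line; paths and the trailing & are illustrative, not copied from the script.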
00:32:09.324 [2024-07-22 20:39:21.135672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.324 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.324 [2024-07-22 20:39:21.270296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.584 [2024-07-22 20:39:21.454180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.584 [2024-07-22 20:39:21.454232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.584 [2024-07-22 20:39:21.454245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.584 [2024-07-22 20:39:21.454255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.584 [2024-07-22 20:39:21.454266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.584 [2024-07-22 20:39:21.454440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.584 [2024-07-22 20:39:21.454654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.584 [2024-07-22 20:39:21.454695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.584 [2024-07-22 20:39:21.454718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.155 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:10.155 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:32:10.155 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:10.155 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 [2024-07-22 20:39:21.930921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 Malloc0 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 20:39:22 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 [2024-07-22 20:39:22.027499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.156 [ 00:32:10.156 { 00:32:10.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:10.156 "subtype": "Discovery", 00:32:10.156 "listen_addresses": [], 00:32:10.156 "allow_any_host": true, 00:32:10.156 "hosts": [] 00:32:10.156 }, 00:32:10.156 { 00:32:10.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.156 "subtype": "NVMe", 00:32:10.156 "listen_addresses": [ 00:32:10.156 { 00:32:10.156 "trtype": "TCP", 00:32:10.156 "adrfam": "IPv4", 00:32:10.156 "traddr": "10.0.0.2", 00:32:10.156 "trsvcid": "4420" 00:32:10.156 } 00:32:10.156 ], 00:32:10.156 "allow_any_host": true, 00:32:10.156 "hosts": [], 00:32:10.156 "serial_number": "SPDK00000000000001", 00:32:10.156 "model_number": "SPDK bdev Controller", 00:32:10.156 "max_namespaces": 2, 00:32:10.156 "min_cntlid": 1, 00:32:10.156 "max_cntlid": 65519, 00:32:10.156 "namespaces": [ 00:32:10.156 { 00:32:10.156 "nsid": 1, 00:32:10.156 "bdev_name": "Malloc0", 00:32:10.156 "name": "Malloc0", 00:32:10.156 "nguid": "4BFBBAA8A41D4B9E918F9A75681613A3", 00:32:10.156 "uuid": "4bfbbaa8-a41d-4b9e-918f-9a75681613a3" 00:32:10.156 } 00:32:10.156 ] 00:32:10.156 } 00:32:10.156 ] 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3780084 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:32:10.156 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:10.156 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.417 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.678 Malloc1 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.678 [ 00:32:10.678 { 00:32:10.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:10.678 "subtype": "Discovery", 00:32:10.678 "listen_addresses": [], 00:32:10.678 "allow_any_host": true, 00:32:10.678 "hosts": [] 00:32:10.678 }, 00:32:10.678 { 00:32:10.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.678 "subtype": "NVMe", 00:32:10.678 "listen_addresses": [ 00:32:10.678 { 00:32:10.678 "trtype": "TCP", 00:32:10.678 "adrfam": "IPv4", 00:32:10.678 "traddr": "10.0.0.2", 00:32:10.678 "trsvcid": "4420" 00:32:10.678 } 00:32:10.678 ], 00:32:10.678 "allow_any_host": true, 00:32:10.678 "hosts": [], 00:32:10.678 "serial_number": "SPDK00000000000001", 00:32:10.678 "model_number": "SPDK bdev Controller", 00:32:10.678 "max_namespaces": 2, 00:32:10.678 "min_cntlid": 1, 00:32:10.678 
"max_cntlid": 65519, 00:32:10.678 "namespaces": [ 00:32:10.678 { 00:32:10.678 "nsid": 1, 00:32:10.678 "bdev_name": "Malloc0", 00:32:10.678 "name": "Malloc0", 00:32:10.678 "nguid": "4BFBBAA8A41D4B9E918F9A75681613A3", 00:32:10.678 "uuid": "4bfbbaa8-a41d-4b9e-918f-9a75681613a3" 00:32:10.678 }, 00:32:10.678 { 00:32:10.678 "nsid": 2, 00:32:10.678 "bdev_name": "Malloc1", 00:32:10.678 "name": "Malloc1", 00:32:10.678 "nguid": "766086F1844E4A57BF5952B6627E7A7C", 00:32:10.678 "uuid": "766086f1-844e-4a57-bf59-52b6627e7a7c" 00:32:10.678 } 00:32:10.678 ] 00:32:10.678 } 00:32:10.678 ] 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3780084 00:32:10.678 Asynchronous Event Request test 00:32:10.678 Attaching to 10.0.0.2 00:32:10.678 Attached to 10.0.0.2 00:32:10.678 Registering asynchronous event callbacks... 00:32:10.678 Starting namespace attribute notice tests for all controllers... 00:32:10.678 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:10.678 aer_cb - Changed Namespace 00:32:10.678 Cleaning up... 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.678 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.938 rmmod nvme_tcp 00:32:10.938 rmmod nvme_fabrics 00:32:10.938 rmmod nvme_keyring 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:32:10.938 20:39:22 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3780018 ']' 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3780018 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3780018 ']' 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3780018 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3780018 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3780018' 00:32:10.938 killing process with pid 3780018 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3780018 00:32:10.938 20:39:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3780018 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.877 20:39:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:14.423 00:32:14.423 real 0m11.978s 00:32:14.423 user 0m10.386s 00:32:14.423 sys 0m5.873s 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:14.423 ************************************ 00:32:14.423 END TEST nvmf_aer 00:32:14.423 ************************************ 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.423 ************************************ 00:32:14.423 START TEST nvmf_async_init 00:32:14.423 ************************************ 00:32:14.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh 
--transport=tcp 00:32:14.423 * Looking for test storage... 00:32:14.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:14.423 20:39:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1823f61b3772450082001679dc254db8 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.423 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.424 20:39:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:21.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:21.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:21.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
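The device scan traced above builds tables of supported Intel (e810, x722) and Mellanox PCI device IDs and walks pci_devs looking for matches; in this run it finds the two E810 ports at 0000:4b:00.0 and 0000:4b:00.1 (0x8086:0x159b, ice driver). A rough sketch of the same check done by hand, assuming lspci and sysfs are available (the grep pattern only covers the Intel IDs visible in the trace, not the full Mellanox table):

# match NICs against the Intel device IDs from the e810/x722 tables above
lspci -Dnn | grep -Ei '8086:(1592|159b|37d2)'
# the harness then maps each matching PCI function to its netdev name via sysfs
ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0 in this run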
00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:21.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:21.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:21.015 20:39:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:21.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:21.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:32:21.275 00:32:21.275 --- 10.0.0.2 ping statistics --- 00:32:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.275 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:21.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:21.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:21.275 00:32:21.275 --- 10.0.0.1 ping statistics --- 00:32:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.275 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3784502 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3784502 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3784502 ']' 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:21.275 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:21.275 [2024-07-22 20:39:33.230476] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:21.275 [2024-07-22 20:39:33.230583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.535 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.535 [2024-07-22 20:39:33.350779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.535 [2024-07-22 20:39:33.527843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.535 [2024-07-22 20:39:33.527888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.535 [2024-07-22 20:39:33.527901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.535 [2024-07-22 20:39:33.527910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.535 [2024-07-22 20:39:33.527920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.535 [2024-07-22 20:39:33.527945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.105 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.105 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:32:22.105 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:22.105 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:22.105 20:39:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 [2024-07-22 20:39:34.016408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 null0 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:22.105 20:39:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1823f61b3772450082001679dc254db8 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.105 [2024-07-22 20:39:34.076694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.105 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.365 nvme0n1 00:32:22.365 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.366 [ 00:32:22.366 { 00:32:22.366 "name": "nvme0n1", 00:32:22.366 "aliases": [ 00:32:22.366 "1823f61b-3772-4500-8200-1679dc254db8" 00:32:22.366 ], 00:32:22.366 "product_name": "NVMe disk", 00:32:22.366 "block_size": 512, 00:32:22.366 "num_blocks": 2097152, 00:32:22.366 "uuid": "1823f61b-3772-4500-8200-1679dc254db8", 00:32:22.366 "assigned_rate_limits": { 00:32:22.366 "rw_ios_per_sec": 0, 00:32:22.366 "rw_mbytes_per_sec": 0, 00:32:22.366 "r_mbytes_per_sec": 0, 00:32:22.366 "w_mbytes_per_sec": 0 00:32:22.366 }, 00:32:22.366 "claimed": false, 00:32:22.366 "zoned": false, 00:32:22.366 "supported_io_types": { 00:32:22.366 "read": true, 00:32:22.366 "write": true, 00:32:22.366 "unmap": false, 00:32:22.366 "flush": true, 00:32:22.366 "reset": true, 00:32:22.366 "nvme_admin": true, 00:32:22.366 "nvme_io": true, 00:32:22.366 "nvme_io_md": false, 00:32:22.366 "write_zeroes": true, 00:32:22.366 "zcopy": false, 00:32:22.366 "get_zone_info": false, 00:32:22.366 "zone_management": false, 00:32:22.366 "zone_append": false, 00:32:22.366 "compare": true, 00:32:22.366 "compare_and_write": true, 00:32:22.366 "abort": true, 00:32:22.366 "seek_hole": false, 00:32:22.366 "seek_data": false, 00:32:22.366 "copy": true, 00:32:22.366 "nvme_iov_md": 
false 00:32:22.366 }, 00:32:22.366 "memory_domains": [ 00:32:22.366 { 00:32:22.366 "dma_device_id": "system", 00:32:22.366 "dma_device_type": 1 00:32:22.366 } 00:32:22.366 ], 00:32:22.366 "driver_specific": { 00:32:22.366 "nvme": [ 00:32:22.366 { 00:32:22.366 "trid": { 00:32:22.366 "trtype": "TCP", 00:32:22.366 "adrfam": "IPv4", 00:32:22.366 "traddr": "10.0.0.2", 00:32:22.366 "trsvcid": "4420", 00:32:22.366 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:22.366 }, 00:32:22.366 "ctrlr_data": { 00:32:22.366 "cntlid": 1, 00:32:22.366 "vendor_id": "0x8086", 00:32:22.366 "model_number": "SPDK bdev Controller", 00:32:22.366 "serial_number": "00000000000000000000", 00:32:22.366 "firmware_revision": "24.09", 00:32:22.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.366 "oacs": { 00:32:22.366 "security": 0, 00:32:22.366 "format": 0, 00:32:22.366 "firmware": 0, 00:32:22.366 "ns_manage": 0 00:32:22.366 }, 00:32:22.366 "multi_ctrlr": true, 00:32:22.366 "ana_reporting": false 00:32:22.366 }, 00:32:22.366 "vs": { 00:32:22.366 "nvme_version": "1.3" 00:32:22.366 }, 00:32:22.366 "ns_data": { 00:32:22.366 "id": 1, 00:32:22.366 "can_share": true 00:32:22.366 } 00:32:22.366 } 00:32:22.366 ], 00:32:22.366 "mp_policy": "active_passive" 00:32:22.366 } 00:32:22.366 } 00:32:22.366 ] 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.366 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.366 [2024-07-22 20:39:34.353286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.366 [2024-07-22 20:39:34.353376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388900 (9): Bad file descriptor 00:32:22.626 [2024-07-22 20:39:34.485341] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
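Everything from nvmf_tcp_init through the controller reset above reduces to a short, reproducible sequence: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), nvmf_tgt is started inside the namespace, and async_init.sh then drives it over JSON-RPC. A condensed sketch of that flow, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py against the target's RPC socket; command names and arguments are the ones visible in the trace:

# network plumbing performed by nvmf_tcp_init
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # target reachable from the initiator side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

# target started inside the namespace, then configured over RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1823f61b3772450082001679dc254db8
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# host side of the same application: attach, inspect, reset
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # reports uuid 1823f61b-3772-4500-8200-1679dc254db8, cntlid 1
./scripts/rpc.py bdev_nvme_reset_controller nvme0

The second bdev_get_bdevs dump that follows confirms the reset reconnected through the same listener (trsvcid 4420) with the controller ID bumped to 2.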
00:32:22.626 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.626 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:22.626 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.626 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.626 [ 00:32:22.626 { 00:32:22.626 "name": "nvme0n1", 00:32:22.626 "aliases": [ 00:32:22.626 "1823f61b-3772-4500-8200-1679dc254db8" 00:32:22.626 ], 00:32:22.626 "product_name": "NVMe disk", 00:32:22.626 "block_size": 512, 00:32:22.626 "num_blocks": 2097152, 00:32:22.626 "uuid": "1823f61b-3772-4500-8200-1679dc254db8", 00:32:22.626 "assigned_rate_limits": { 00:32:22.626 "rw_ios_per_sec": 0, 00:32:22.626 "rw_mbytes_per_sec": 0, 00:32:22.626 "r_mbytes_per_sec": 0, 00:32:22.626 "w_mbytes_per_sec": 0 00:32:22.626 }, 00:32:22.626 "claimed": false, 00:32:22.626 "zoned": false, 00:32:22.626 "supported_io_types": { 00:32:22.626 "read": true, 00:32:22.626 "write": true, 00:32:22.626 "unmap": false, 00:32:22.626 "flush": true, 00:32:22.626 "reset": true, 00:32:22.626 "nvme_admin": true, 00:32:22.626 "nvme_io": true, 00:32:22.626 "nvme_io_md": false, 00:32:22.626 "write_zeroes": true, 00:32:22.626 "zcopy": false, 00:32:22.626 "get_zone_info": false, 00:32:22.626 "zone_management": false, 00:32:22.626 "zone_append": false, 00:32:22.626 "compare": true, 00:32:22.626 "compare_and_write": true, 00:32:22.626 "abort": true, 00:32:22.626 "seek_hole": false, 00:32:22.626 "seek_data": false, 00:32:22.626 "copy": true, 00:32:22.626 "nvme_iov_md": false 00:32:22.626 }, 00:32:22.626 "memory_domains": [ 00:32:22.626 { 00:32:22.626 "dma_device_id": "system", 00:32:22.626 "dma_device_type": 1 00:32:22.626 } 00:32:22.626 ], 00:32:22.626 "driver_specific": { 00:32:22.626 "nvme": [ 00:32:22.626 { 00:32:22.626 "trid": { 00:32:22.626 "trtype": "TCP", 00:32:22.626 "adrfam": "IPv4", 00:32:22.626 "traddr": "10.0.0.2", 00:32:22.626 "trsvcid": "4420", 00:32:22.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:22.626 }, 00:32:22.626 "ctrlr_data": { 00:32:22.626 "cntlid": 2, 00:32:22.626 "vendor_id": "0x8086", 00:32:22.626 "model_number": "SPDK bdev Controller", 00:32:22.626 "serial_number": "00000000000000000000", 00:32:22.626 "firmware_revision": "24.09", 00:32:22.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.627 "oacs": { 00:32:22.627 "security": 0, 00:32:22.627 "format": 0, 00:32:22.627 "firmware": 0, 00:32:22.627 "ns_manage": 0 00:32:22.627 }, 00:32:22.627 "multi_ctrlr": true, 00:32:22.627 "ana_reporting": false 00:32:22.627 }, 00:32:22.627 "vs": { 00:32:22.627 "nvme_version": "1.3" 00:32:22.627 }, 00:32:22.627 "ns_data": { 00:32:22.627 "id": 1, 00:32:22.627 "can_share": true 00:32:22.627 } 00:32:22.627 } 00:32:22.627 ], 00:32:22.627 "mp_policy": "active_passive" 00:32:22.627 } 00:32:22.627 } 00:32:22.627 ] 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.627 20:39:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rtbDdVVmQe 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rtbDdVVmQe 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.627 [2024-07-22 20:39:34.553944] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:22.627 [2024-07-22 20:39:34.554101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rtbDdVVmQe 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.627 [2024-07-22 20:39:34.565959] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rtbDdVVmQe 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.627 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.627 [2024-07-22 20:39:34.578011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:22.627 [2024-07-22 20:39:34.578085] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:22.887 nvme0n1 00:32:22.887 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.887 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:22.887 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
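The tail of async_init.sh exercises the (still experimental, per the notices above) TLS path: a PSK in the NVMe-oF interchange format is written to a mktemp file with mode 0600, arbitrary hosts are disallowed on the subsystem, a second listener is opened on port 4421 with --secure-channel, the host NQN is registered with its PSK, and the initiator re-attaches through that listener with the same key. Condensed from the trace under the same rpc.py assumption as above; the temp path is whatever mktemp returns, /tmp/tmp.rtbDdVVmQe in this run:

KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
rm -f "$KEY_PATH"    # the script removes the key file before tearing down

Both the path-based --psk option on the initiator and the PSK path on the target side log deprecation warnings here; the trace notes they are scheduled for removal in v24.09, which the shutdown messages at the end of the test repeat.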
00:32:22.887 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.887 [ 00:32:22.887 { 00:32:22.887 "name": "nvme0n1", 00:32:22.887 "aliases": [ 00:32:22.887 "1823f61b-3772-4500-8200-1679dc254db8" 00:32:22.887 ], 00:32:22.887 "product_name": "NVMe disk", 00:32:22.887 "block_size": 512, 00:32:22.887 "num_blocks": 2097152, 00:32:22.887 "uuid": "1823f61b-3772-4500-8200-1679dc254db8", 00:32:22.888 "assigned_rate_limits": { 00:32:22.888 "rw_ios_per_sec": 0, 00:32:22.888 "rw_mbytes_per_sec": 0, 00:32:22.888 "r_mbytes_per_sec": 0, 00:32:22.888 "w_mbytes_per_sec": 0 00:32:22.888 }, 00:32:22.888 "claimed": false, 00:32:22.888 "zoned": false, 00:32:22.888 "supported_io_types": { 00:32:22.888 "read": true, 00:32:22.888 "write": true, 00:32:22.888 "unmap": false, 00:32:22.888 "flush": true, 00:32:22.888 "reset": true, 00:32:22.888 "nvme_admin": true, 00:32:22.888 "nvme_io": true, 00:32:22.888 "nvme_io_md": false, 00:32:22.888 "write_zeroes": true, 00:32:22.888 "zcopy": false, 00:32:22.888 "get_zone_info": false, 00:32:22.888 "zone_management": false, 00:32:22.888 "zone_append": false, 00:32:22.888 "compare": true, 00:32:22.888 "compare_and_write": true, 00:32:22.888 "abort": true, 00:32:22.888 "seek_hole": false, 00:32:22.888 "seek_data": false, 00:32:22.888 "copy": true, 00:32:22.888 "nvme_iov_md": false 00:32:22.888 }, 00:32:22.888 "memory_domains": [ 00:32:22.888 { 00:32:22.888 "dma_device_id": "system", 00:32:22.888 "dma_device_type": 1 00:32:22.888 } 00:32:22.888 ], 00:32:22.888 "driver_specific": { 00:32:22.888 "nvme": [ 00:32:22.888 { 00:32:22.888 "trid": { 00:32:22.888 "trtype": "TCP", 00:32:22.888 "adrfam": "IPv4", 00:32:22.888 "traddr": "10.0.0.2", 00:32:22.888 "trsvcid": "4421", 00:32:22.888 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:22.888 }, 00:32:22.888 "ctrlr_data": { 00:32:22.888 "cntlid": 3, 00:32:22.888 "vendor_id": "0x8086", 00:32:22.888 "model_number": "SPDK bdev Controller", 00:32:22.888 "serial_number": "00000000000000000000", 00:32:22.888 "firmware_revision": "24.09", 00:32:22.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.888 "oacs": { 00:32:22.888 "security": 0, 00:32:22.888 "format": 0, 00:32:22.888 "firmware": 0, 00:32:22.888 "ns_manage": 0 00:32:22.888 }, 00:32:22.888 "multi_ctrlr": true, 00:32:22.888 "ana_reporting": false 00:32:22.888 }, 00:32:22.888 "vs": { 00:32:22.888 "nvme_version": "1.3" 00:32:22.888 }, 00:32:22.888 "ns_data": { 00:32:22.888 "id": 1, 00:32:22.888 "can_share": true 00:32:22.888 } 00:32:22.888 } 00:32:22.888 ], 00:32:22.888 "mp_policy": "active_passive" 00:32:22.888 } 00:32:22.888 } 00:32:22.888 ] 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rtbDdVVmQe 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:32:22.888 20:39:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:22.888 rmmod nvme_tcp 00:32:22.888 rmmod nvme_fabrics 00:32:22.888 rmmod nvme_keyring 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3784502 ']' 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3784502 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3784502 ']' 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3784502 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3784502 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3784502' 00:32:22.888 killing process with pid 3784502 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3784502 00:32:22.888 [2024-07-22 20:39:34.834433] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:22.888 [2024-07-22 20:39:34.834470] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:22.888 20:39:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3784502 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.829 20:39:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.829 20:39:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.377 00:32:26.377 real 0m11.795s 00:32:26.377 user 0m4.532s 00:32:26.377 sys 0m5.723s 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:26.377 ************************************ 00:32:26.377 END TEST nvmf_async_init 00:32:26.377 ************************************ 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.377 ************************************ 00:32:26.377 START TEST dma 00:32:26.377 ************************************ 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:26.377 * Looking for test storage... 00:32:26.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:32:26.377 00:32:26.377 real 0m0.129s 00:32:26.377 user 0m0.060s 00:32:26.377 sys 0m0.077s 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.377 20:39:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:32:26.377 ************************************ 00:32:26.377 END TEST dma 00:32:26.377 ************************************ 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.377 ************************************ 00:32:26.377 START TEST nvmf_identify 00:32:26.377 ************************************ 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:26.377 * Looking for test storage... 
00:32:26.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.377 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:32:26.378 20:39:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:32.969 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.969 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:32.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:32.970 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.970 20:39:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:32.970 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:32.970 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.970 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:32.971 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.232 20:39:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.232 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:33.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:32:33.493 00:32:33.493 --- 10.0.0.2 ping statistics --- 00:32:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.493 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:33.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:32:33.493 00:32:33.493 --- 10.0.0.1 ping statistics --- 00:32:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.493 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3789099 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3789099 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3789099 ']' 00:32:33.493 20:39:45 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.493 20:39:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:33.493 [2024-07-22 20:39:45.383113] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:33.493 [2024-07-22 20:39:45.383218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.493 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.493 [2024-07-22 20:39:45.487299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:33.754 [2024-07-22 20:39:45.671182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.754 [2024-07-22 20:39:45.671231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.754 [2024-07-22 20:39:45.671244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.754 [2024-07-22 20:39:45.671254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.754 [2024-07-22 20:39:45.671265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
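For reference, the target/initiator split exercised in the trace above can be reproduced with a short shell sequence. This is a minimal sketch assembled only from the ip, iptables, ping, modprobe and nvmf_tgt invocations visible in the log; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addressing and the -i/-e/-m flags are specific to this run, and SPDK_DIR below is a placeholder for the workspace checkout.

#!/usr/bin/env bash
# Sketch of the NVMe/TCP test topology recorded above (per-run values, adjust to your NICs).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder path from this run
TGT_IF=cvl_0_0          # moved into the target namespace, gets 10.0.0.2
INI_IF=cvl_0_1          # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default listener port on the initiator side.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, as performed in the trace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

# Start the SPDK NVMe-oF target inside the target namespace
# (the test harness then waits for its RPC socket before configuring it).
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &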
00:32:33.754 [2024-07-22 20:39:45.671501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.754 [2024-07-22 20:39:45.671610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.754 [2024-07-22 20:39:45.671774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.754 [2024-07-22 20:39:45.671798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 [2024-07-22 20:39:46.160912] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 Malloc0 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 [2024-07-22 20:39:46.297570] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:34.326 [ 00:32:34.326 { 00:32:34.326 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:34.326 "subtype": "Discovery", 00:32:34.326 "listen_addresses": [ 00:32:34.326 { 00:32:34.326 "trtype": "TCP", 00:32:34.326 "adrfam": "IPv4", 00:32:34.326 "traddr": "10.0.0.2", 00:32:34.326 "trsvcid": "4420" 00:32:34.326 } 00:32:34.326 ], 00:32:34.326 "allow_any_host": true, 00:32:34.326 "hosts": [] 00:32:34.326 }, 00:32:34.326 { 00:32:34.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.326 "subtype": "NVMe", 00:32:34.326 "listen_addresses": [ 00:32:34.326 { 00:32:34.326 "trtype": "TCP", 00:32:34.326 "adrfam": "IPv4", 00:32:34.326 "traddr": "10.0.0.2", 00:32:34.326 "trsvcid": "4420" 00:32:34.326 } 00:32:34.326 ], 00:32:34.326 "allow_any_host": true, 00:32:34.326 "hosts": [], 00:32:34.326 "serial_number": "SPDK00000000000001", 00:32:34.326 "model_number": "SPDK bdev Controller", 00:32:34.326 "max_namespaces": 32, 00:32:34.326 "min_cntlid": 1, 00:32:34.326 "max_cntlid": 65519, 00:32:34.326 "namespaces": [ 00:32:34.326 { 00:32:34.326 "nsid": 1, 00:32:34.326 "bdev_name": "Malloc0", 00:32:34.326 "name": "Malloc0", 00:32:34.326 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:34.326 "eui64": "ABCDEF0123456789", 00:32:34.326 "uuid": "5148269f-cc7a-47a5-815e-361003b88579" 00:32:34.326 } 00:32:34.326 ] 00:32:34.326 } 00:32:34.326 ] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.326 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:34.590 [2024-07-22 20:39:46.379073] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
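The subsystem configuration and the identify invocation traced above map onto the following RPC sequence. rpc_cmd in the log is the test suite's wrapper; issuing the same methods directly through scripts/rpc.py should be equivalent. This is a sketch only: SPDK_DIR is a placeholder, and the transport options are copied verbatim from the trace (NVMF_TRANSPORT_OPTS '-t tcp -o' plus the '-u 8192' passed by identify.sh), not interpreted further here.

#!/usr/bin/env bash
# RPC sequence recorded above, issued directly with scripts/rpc.py.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder path from this run
RPC="$SPDK_DIR/scripts/rpc.py"

# Transport options exactly as recorded above (-t tcp -o -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode1.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Data and discovery listeners on the target-side address.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems

# Query the discovery subsystem; this produces the controller report that follows in the log.
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all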
00:32:34.590 [2024-07-22 20:39:46.379159] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789446 ] 00:32:34.590 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.590 [2024-07-22 20:39:46.433717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:34.590 [2024-07-22 20:39:46.433811] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:34.590 [2024-07-22 20:39:46.433822] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:34.590 [2024-07-22 20:39:46.433841] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:34.590 [2024-07-22 20:39:46.433862] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:34.590 [2024-07-22 20:39:46.434393] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:34.590 [2024-07-22 20:39:46.434443] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025380 0 00:32:34.590 [2024-07-22 20:39:46.441221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:34.590 [2024-07-22 20:39:46.441245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:34.590 [2024-07-22 20:39:46.441253] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:34.590 [2024-07-22 20:39:46.441260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:34.590 [2024-07-22 20:39:46.441314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.441327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.441335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.441360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:34.590 [2024-07-22 20:39:46.441386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.449217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.449247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.449258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.590 [2024-07-22 20:39:46.449290] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:34.590 [2024-07-22 20:39:46.449305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:34.590 [2024-07-22 20:39:46.449317] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:34.590 [2024-07-22 20:39:46.449344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449353] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.449377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.590 [2024-07-22 20:39:46.449399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.449645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.449656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.449663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.590 [2024-07-22 20:39:46.449682] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:34.590 [2024-07-22 20:39:46.449699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:34.590 [2024-07-22 20:39:46.449710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.449737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.590 [2024-07-22 20:39:46.449754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.449866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.449876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.449881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.590 [2024-07-22 20:39:46.449896] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:34.590 [2024-07-22 20:39:46.449908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:34.590 [2024-07-22 20:39:46.449919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.449936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.449947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.590 [2024-07-22 20:39:46.449962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.450073] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.450085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.450090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.590 [2024-07-22 20:39:46.450105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:34.590 [2024-07-22 20:39:46.450119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.450144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.590 [2024-07-22 20:39:46.450161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.450323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.450337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.450342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.590 [2024-07-22 20:39:46.450357] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:34.590 [2024-07-22 20:39:46.450365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:34.590 [2024-07-22 20:39:46.450377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:34.590 [2024-07-22 20:39:46.450486] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:34.590 [2024-07-22 20:39:46.450493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:34.590 [2024-07-22 20:39:46.450506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.590 [2024-07-22 20:39:46.450530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.590 [2024-07-22 20:39:46.450548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.590 [2024-07-22 20:39:46.450752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.590 [2024-07-22 20:39:46.450762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:32:34.590 [2024-07-22 20:39:46.450767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.590 [2024-07-22 20:39:46.450779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.450788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:34.591 [2024-07-22 20:39:46.450803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.450811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.450817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.450828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.591 [2024-07-22 20:39:46.450846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.591 [2024-07-22 20:39:46.450988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.591 [2024-07-22 20:39:46.450998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.591 [2024-07-22 20:39:46.451003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.451017] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:34.591 [2024-07-22 20:39:46.451025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.451037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:34.591 [2024-07-22 20:39:46.451047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.451066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.451085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.591 [2024-07-22 20:39:46.451101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.591 [2024-07-22 20:39:46.451268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.591 [2024-07-22 20:39:46.451278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.591 [2024-07-22 20:39:46.451284] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451292] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=0 00:32:34.591 [2024-07-22 20:39:46.451300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.591 [2024-07-22 20:39:46.451308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451324] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451332] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.591 [2024-07-22 20:39:46.451476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.591 [2024-07-22 20:39:46.451481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.451505] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:34.591 [2024-07-22 20:39:46.451516] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:34.591 [2024-07-22 20:39:46.451523] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:34.591 [2024-07-22 20:39:46.451532] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:34.591 [2024-07-22 20:39:46.451540] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:34.591 [2024-07-22 20:39:46.451548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.451560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.451574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.451603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:34.591 [2024-07-22 20:39:46.451621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.591 [2024-07-22 20:39:46.451851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.591 [2024-07-22 20:39:46.451861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.591 [2024-07-22 20:39:46.451866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.451893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 
00:32:34.591 [2024-07-22 20:39:46.451920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.591 [2024-07-22 20:39:46.451932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.451955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.591 [2024-07-22 20:39:46.451963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.451983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.591 [2024-07-22 20:39:46.451992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.451997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.452012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.591 [2024-07-22 20:39:46.452019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.452033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:34.591 [2024-07-22 20:39:46.452045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.452064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.591 [2024-07-22 20:39:46.452082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.591 [2024-07-22 20:39:46.452089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:34.591 [2024-07-22 20:39:46.452099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:34.591 [2024-07-22 20:39:46.452106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.591 [2024-07-22 20:39:46.452112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.591 [2024-07-22 20:39:46.452364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.591 [2024-07-22 20:39:46.452374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.591 [2024-07-22 20:39:46.452380] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.452395] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:34.591 [2024-07-22 20:39:46.452404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:34.591 [2024-07-22 20:39:46.452426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.591 [2024-07-22 20:39:46.452445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.591 [2024-07-22 20:39:46.452461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.591 [2024-07-22 20:39:46.452813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.591 [2024-07-22 20:39:46.452826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.591 [2024-07-22 20:39:46.452834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452841] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:34.591 [2024-07-22 20:39:46.452849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.591 [2024-07-22 20:39:46.452856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452873] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.591 [2024-07-22 20:39:46.452957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.591 [2024-07-22 20:39:46.452962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.452969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.591 [2024-07-22 20:39:46.452991] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:34.591 [2024-07-22 20:39:46.453032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.591 [2024-07-22 20:39:46.453040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.592 [2024-07-22 20:39:46.453053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.592 [2024-07-22 20:39:46.453063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.453069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.453078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000025380) 00:32:34.592 [2024-07-22 20:39:46.453091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.592 [2024-07-22 20:39:46.453109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.592 [2024-07-22 20:39:46.453119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:34.592 [2024-07-22 20:39:46.457218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.592 [2024-07-22 20:39:46.457236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.592 [2024-07-22 20:39:46.457242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.457249] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=1024, cccid=4 00:32:34.592 [2024-07-22 20:39:46.457261] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=1024 00:32:34.592 [2024-07-22 20:39:46.457269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.457282] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.457288] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.457300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.592 [2024-07-22 20:39:46.457310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.592 [2024-07-22 20:39:46.457316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.457323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:34.592 [2024-07-22 20:39:46.497214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.592 [2024-07-22 20:39:46.497234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.592 [2024-07-22 20:39:46.497239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.497246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.592 [2024-07-22 20:39:46.497270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.497278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.592 [2024-07-22 20:39:46.497291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.592 [2024-07-22 20:39:46.497316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.592 [2024-07-22 20:39:46.497575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.592 [2024-07-22 20:39:46.497584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.592 [2024-07-22 20:39:46.497590] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.497603] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=3072, cccid=4 00:32:34.592 [2024-07-22 20:39:46.497610] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=3072 00:32:34.592 [2024-07-22 20:39:46.497616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.497670] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.497676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.592 [2024-07-22 20:39:46.538441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.592 [2024-07-22 20:39:46.538446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.592 [2024-07-22 20:39:46.538472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.592 [2024-07-22 20:39:46.538496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.592 [2024-07-22 20:39:46.538520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.592 [2024-07-22 20:39:46.538783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.592 [2024-07-22 20:39:46.538791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.592 [2024-07-22 20:39:46.538797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538803] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=8, cccid=4 00:32:34.592 [2024-07-22 20:39:46.538809] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=8 00:32:34.592 [2024-07-22 20:39:46.538816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.538833] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.579416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.592 [2024-07-22 20:39:46.579434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.592 [2024-07-22 20:39:46.579440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.592 [2024-07-22 20:39:46.579446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.592 ===================================================== 00:32:34.592 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:34.592 ===================================================== 00:32:34.592 Controller Capabilities/Features 00:32:34.592 ================================ 00:32:34.592 Vendor ID: 0000 00:32:34.592 Subsystem Vendor ID: 0000 00:32:34.592 Serial Number: .................... 00:32:34.592 Model Number: ........................................ 
00:32:34.592 Firmware Version: 24.09 00:32:34.592 Recommended Arb Burst: 0 00:32:34.592 IEEE OUI Identifier: 00 00 00 00:32:34.592 Multi-path I/O 00:32:34.592 May have multiple subsystem ports: No 00:32:34.592 May have multiple controllers: No 00:32:34.592 Associated with SR-IOV VF: No 00:32:34.592 Max Data Transfer Size: 131072 00:32:34.592 Max Number of Namespaces: 0 00:32:34.592 Max Number of I/O Queues: 1024 00:32:34.592 NVMe Specification Version (VS): 1.3 00:32:34.592 NVMe Specification Version (Identify): 1.3 00:32:34.592 Maximum Queue Entries: 128 00:32:34.592 Contiguous Queues Required: Yes 00:32:34.592 Arbitration Mechanisms Supported 00:32:34.592 Weighted Round Robin: Not Supported 00:32:34.592 Vendor Specific: Not Supported 00:32:34.592 Reset Timeout: 15000 ms 00:32:34.592 Doorbell Stride: 4 bytes 00:32:34.592 NVM Subsystem Reset: Not Supported 00:32:34.592 Command Sets Supported 00:32:34.592 NVM Command Set: Supported 00:32:34.592 Boot Partition: Not Supported 00:32:34.592 Memory Page Size Minimum: 4096 bytes 00:32:34.592 Memory Page Size Maximum: 4096 bytes 00:32:34.592 Persistent Memory Region: Not Supported 00:32:34.592 Optional Asynchronous Events Supported 00:32:34.592 Namespace Attribute Notices: Not Supported 00:32:34.592 Firmware Activation Notices: Not Supported 00:32:34.592 ANA Change Notices: Not Supported 00:32:34.592 PLE Aggregate Log Change Notices: Not Supported 00:32:34.592 LBA Status Info Alert Notices: Not Supported 00:32:34.592 EGE Aggregate Log Change Notices: Not Supported 00:32:34.592 Normal NVM Subsystem Shutdown event: Not Supported 00:32:34.592 Zone Descriptor Change Notices: Not Supported 00:32:34.592 Discovery Log Change Notices: Supported 00:32:34.592 Controller Attributes 00:32:34.592 128-bit Host Identifier: Not Supported 00:32:34.592 Non-Operational Permissive Mode: Not Supported 00:32:34.592 NVM Sets: Not Supported 00:32:34.592 Read Recovery Levels: Not Supported 00:32:34.592 Endurance Groups: Not Supported 00:32:34.592 Predictable Latency Mode: Not Supported 00:32:34.592 Traffic Based Keep ALive: Not Supported 00:32:34.592 Namespace Granularity: Not Supported 00:32:34.592 SQ Associations: Not Supported 00:32:34.592 UUID List: Not Supported 00:32:34.592 Multi-Domain Subsystem: Not Supported 00:32:34.592 Fixed Capacity Management: Not Supported 00:32:34.592 Variable Capacity Management: Not Supported 00:32:34.592 Delete Endurance Group: Not Supported 00:32:34.592 Delete NVM Set: Not Supported 00:32:34.592 Extended LBA Formats Supported: Not Supported 00:32:34.592 Flexible Data Placement Supported: Not Supported 00:32:34.592 00:32:34.592 Controller Memory Buffer Support 00:32:34.592 ================================ 00:32:34.592 Supported: No 00:32:34.592 00:32:34.592 Persistent Memory Region Support 00:32:34.592 ================================ 00:32:34.592 Supported: No 00:32:34.592 00:32:34.592 Admin Command Set Attributes 00:32:34.592 ============================ 00:32:34.592 Security Send/Receive: Not Supported 00:32:34.592 Format NVM: Not Supported 00:32:34.592 Firmware Activate/Download: Not Supported 00:32:34.592 Namespace Management: Not Supported 00:32:34.592 Device Self-Test: Not Supported 00:32:34.592 Directives: Not Supported 00:32:34.592 NVMe-MI: Not Supported 00:32:34.592 Virtualization Management: Not Supported 00:32:34.592 Doorbell Buffer Config: Not Supported 00:32:34.592 Get LBA Status Capability: Not Supported 00:32:34.592 Command & Feature Lockdown Capability: Not Supported 00:32:34.592 Abort Command Limit: 1 00:32:34.592 Async 
Event Request Limit: 4 00:32:34.592 Number of Firmware Slots: N/A 00:32:34.592 Firmware Slot 1 Read-Only: N/A 00:32:34.593 Firmware Activation Without Reset: N/A 00:32:34.593 Multiple Update Detection Support: N/A 00:32:34.593 Firmware Update Granularity: No Information Provided 00:32:34.593 Per-Namespace SMART Log: No 00:32:34.593 Asymmetric Namespace Access Log Page: Not Supported 00:32:34.593 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:34.593 Command Effects Log Page: Not Supported 00:32:34.593 Get Log Page Extended Data: Supported 00:32:34.593 Telemetry Log Pages: Not Supported 00:32:34.593 Persistent Event Log Pages: Not Supported 00:32:34.593 Supported Log Pages Log Page: May Support 00:32:34.593 Commands Supported & Effects Log Page: Not Supported 00:32:34.593 Feature Identifiers & Effects Log Page:May Support 00:32:34.593 NVMe-MI Commands & Effects Log Page: May Support 00:32:34.593 Data Area 4 for Telemetry Log: Not Supported 00:32:34.593 Error Log Page Entries Supported: 128 00:32:34.593 Keep Alive: Not Supported 00:32:34.593 00:32:34.593 NVM Command Set Attributes 00:32:34.593 ========================== 00:32:34.593 Submission Queue Entry Size 00:32:34.593 Max: 1 00:32:34.593 Min: 1 00:32:34.593 Completion Queue Entry Size 00:32:34.593 Max: 1 00:32:34.593 Min: 1 00:32:34.593 Number of Namespaces: 0 00:32:34.593 Compare Command: Not Supported 00:32:34.593 Write Uncorrectable Command: Not Supported 00:32:34.593 Dataset Management Command: Not Supported 00:32:34.593 Write Zeroes Command: Not Supported 00:32:34.593 Set Features Save Field: Not Supported 00:32:34.593 Reservations: Not Supported 00:32:34.593 Timestamp: Not Supported 00:32:34.593 Copy: Not Supported 00:32:34.593 Volatile Write Cache: Not Present 00:32:34.593 Atomic Write Unit (Normal): 1 00:32:34.593 Atomic Write Unit (PFail): 1 00:32:34.593 Atomic Compare & Write Unit: 1 00:32:34.593 Fused Compare & Write: Supported 00:32:34.593 Scatter-Gather List 00:32:34.593 SGL Command Set: Supported 00:32:34.593 SGL Keyed: Supported 00:32:34.593 SGL Bit Bucket Descriptor: Not Supported 00:32:34.593 SGL Metadata Pointer: Not Supported 00:32:34.593 Oversized SGL: Not Supported 00:32:34.593 SGL Metadata Address: Not Supported 00:32:34.593 SGL Offset: Supported 00:32:34.593 Transport SGL Data Block: Not Supported 00:32:34.593 Replay Protected Memory Block: Not Supported 00:32:34.593 00:32:34.593 Firmware Slot Information 00:32:34.593 ========================= 00:32:34.593 Active slot: 0 00:32:34.593 00:32:34.593 00:32:34.593 Error Log 00:32:34.593 ========= 00:32:34.593 00:32:34.593 Active Namespaces 00:32:34.593 ================= 00:32:34.593 Discovery Log Page 00:32:34.593 ================== 00:32:34.593 Generation Counter: 2 00:32:34.593 Number of Records: 2 00:32:34.593 Record Format: 0 00:32:34.593 00:32:34.593 Discovery Log Entry 0 00:32:34.593 ---------------------- 00:32:34.593 Transport Type: 3 (TCP) 00:32:34.593 Address Family: 1 (IPv4) 00:32:34.593 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:34.593 Entry Flags: 00:32:34.593 Duplicate Returned Information: 1 00:32:34.593 Explicit Persistent Connection Support for Discovery: 1 00:32:34.593 Transport Requirements: 00:32:34.593 Secure Channel: Not Required 00:32:34.593 Port ID: 0 (0x0000) 00:32:34.593 Controller ID: 65535 (0xffff) 00:32:34.593 Admin Max SQ Size: 128 00:32:34.593 Transport Service Identifier: 4420 00:32:34.593 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:34.593 Transport Address: 10.0.0.2 00:32:34.593 
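The discovery log page dumped here consists of a fixed header (Generation Counter at byte offset 0, Number of Records at 8, Record Format at 16, header padded to 1024 bytes) followed by one 1024-byte entry per subsystem port: entry 0 above describes the discovery subsystem itself, entry 1 below the NVM subsystem nqn.2016-06.io.spdk:cnode1. A minimal sketch of that layout, assuming the standard NVMe-oF discovery log entry offsets (the struct is hand-written for illustration and assumes a little-endian host; when building against SPDK, prefer the definitions shipped in spdk/nvmf_spec.h):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hand-written sketch of one NVMe-oF discovery log page entry (1024 bytes),
 * matching the fields spdk_nvme_identify prints in this trace. */
struct disc_log_entry {
	uint8_t  trtype;        /* 3 = TCP, as printed above                   */
	uint8_t  adrfam;        /* 1 = IPv4                                    */
	uint8_t  subtype;       /* 3 = discovery subsystem, 2 = NVM subsystem  */
	uint8_t  treq;          /* transport requirements (secure channel)     */
	uint16_t portid;        /* 0 above                                     */
	uint16_t cntlid;        /* 0xffff = dynamic controller model           */
	uint16_t asqsz;         /* Admin Max SQ Size, 128 above                */
	uint8_t  rsvd10[22];
	char     trsvcid[32];   /* "4420"                                      */
	uint8_t  rsvd64[192];
	char     subnqn[256];   /* e.g. nqn.2016-06.io.spdk:cnode1             */
	char     traddr[256];   /* "10.0.0.2"                                  */
	uint8_t  tsas[256];     /* transport-specific address subtype          */
} __attribute__((packed));

_Static_assert(sizeof(struct disc_log_entry) == 1024, "unexpected entry size");

static void print_entry(const struct disc_log_entry *e)
{
	printf("Transport Type: %u\n", e->trtype);
	printf("Address Family: %u\n", e->adrfam);
	printf("Subsystem Type: %u\n", e->subtype);
	printf("Port ID: %u\n", e->portid);
	printf("Controller ID: %u\n", e->cntlid);
	printf("Admin Max SQ Size: %u\n", e->asqsz);
	printf("Transport Service Identifier: %.32s\n", e->trsvcid);
	printf("NVM Subsystem Qualified Name: %.256s\n", e->subnqn);
	printf("Transport Address: %.256s\n", e->traddr);
}

int main(void)
{
	/* Fabricate an entry with the values reported for Discovery Log Entry 1. */
	struct disc_log_entry e;

	memset(&e, 0, sizeof(e));
	e.trtype = 3; e.adrfam = 1; e.subtype = 2;
	e.portid = 0; e.cntlid = 0xffff; e.asqsz = 128;
	strncpy(e.trsvcid, "4420", sizeof(e.trsvcid));
	strncpy(e.subnqn, "nqn.2016-06.io.spdk:cnode1", sizeof(e.subnqn));
	strncpy(e.traddr, "10.0.0.2", sizeof(e.traddr));
	print_entry(&e);
	return 0;
}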
Discovery Log Entry 1 00:32:34.593 ---------------------- 00:32:34.593 Transport Type: 3 (TCP) 00:32:34.593 Address Family: 1 (IPv4) 00:32:34.593 Subsystem Type: 2 (NVM Subsystem) 00:32:34.593 Entry Flags: 00:32:34.593 Duplicate Returned Information: 0 00:32:34.593 Explicit Persistent Connection Support for Discovery: 0 00:32:34.593 Transport Requirements: 00:32:34.593 Secure Channel: Not Required 00:32:34.593 Port ID: 0 (0x0000) 00:32:34.593 Controller ID: 65535 (0xffff) 00:32:34.593 Admin Max SQ Size: 128 00:32:34.593 Transport Service Identifier: 4420 00:32:34.593 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:34.593 Transport Address: 10.0.0.2 [2024-07-22 20:39:46.579586] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:34.593 [2024-07-22 20:39:46.579602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.579615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.593 [2024-07-22 20:39:46.579624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.579632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.593 [2024-07-22 20:39:46.579639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.579647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.593 [2024-07-22 20:39:46.579654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.579662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.593 [2024-07-22 20:39:46.579674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.579681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.579688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.593 [2024-07-22 20:39:46.579703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.593 [2024-07-22 20:39:46.579724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.593 [2024-07-22 20:39:46.579932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.593 [2024-07-22 20:39:46.579942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.593 [2024-07-22 20:39:46.579948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.579954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.579966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.579973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.579979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000025380) 00:32:34.593 [2024-07-22 20:39:46.579996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.593 [2024-07-22 20:39:46.580015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.593 [2024-07-22 20:39:46.580169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.593 [2024-07-22 20:39:46.580178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.593 [2024-07-22 20:39:46.580183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.580197] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:34.593 [2024-07-22 20:39:46.580212] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:34.593 [2024-07-22 20:39:46.580226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.593 [2024-07-22 20:39:46.580253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.593 [2024-07-22 20:39:46.580269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.593 [2024-07-22 20:39:46.580386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.593 [2024-07-22 20:39:46.580396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.593 [2024-07-22 20:39:46.580401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.580421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.593 [2024-07-22 20:39:46.580443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.593 [2024-07-22 20:39:46.580457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.593 [2024-07-22 20:39:46.580649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.593 [2024-07-22 20:39:46.580658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.593 [2024-07-22 20:39:46.580664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.593 [2024-07-22 20:39:46.580683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580689] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.593 [2024-07-22 20:39:46.580695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.593 [2024-07-22 20:39:46.580705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.594 [2024-07-22 20:39:46.580718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.594 [2024-07-22 20:39:46.580897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.594 [2024-07-22 20:39:46.580906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.594 [2024-07-22 20:39:46.580911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.580917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.594 [2024-07-22 20:39:46.580933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.580940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.580945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.594 [2024-07-22 20:39:46.580955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.594 [2024-07-22 20:39:46.580969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.594 [2024-07-22 20:39:46.581195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.594 [2024-07-22 20:39:46.585215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.594 [2024-07-22 20:39:46.585225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.585231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.594 [2024-07-22 20:39:46.585250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.585256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.585262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.594 [2024-07-22 20:39:46.585273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.594 [2024-07-22 20:39:46.585291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.594 [2024-07-22 20:39:46.585527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.594 [2024-07-22 20:39:46.585536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.594 [2024-07-22 20:39:46.585541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.594 [2024-07-22 20:39:46.585547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:34.594 [2024-07-22 20:39:46.585559] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:32:34.856 00:32:34.856 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:34.856 [2024-07-22 20:39:46.682118] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:34.856 [2024-07-22 20:39:46.682210] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789455 ] 00:32:34.856 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.856 [2024-07-22 20:39:46.736605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:34.856 [2024-07-22 20:39:46.736695] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:34.856 [2024-07-22 20:39:46.736706] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:34.856 [2024-07-22 20:39:46.736725] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:34.856 [2024-07-22 20:39:46.736744] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:34.856 [2024-07-22 20:39:46.737066] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:34.856 [2024-07-22 20:39:46.737106] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025380 0 00:32:34.856 [2024-07-22 20:39:46.751219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:34.856 [2024-07-22 20:39:46.751245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:34.856 [2024-07-22 20:39:46.751254] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:34.856 [2024-07-22 20:39:46.751260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:34.856 [2024-07-22 20:39:46.751311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.856 [2024-07-22 20:39:46.751323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.856 [2024-07-22 20:39:46.751331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.856 [2024-07-22 20:39:46.751357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:34.856 [2024-07-22 20:39:46.751384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.856 [2024-07-22 20:39:46.759216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.856 [2024-07-22 20:39:46.759242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.856 [2024-07-22 20:39:46.759249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.856 [2024-07-22 20:39:46.759258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.856 [2024-07-22 20:39:46.759275] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:34.856 [2024-07-22 20:39:46.759291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:34.857 [2024-07-22 20:39:46.759301] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:34.857 [2024-07-22 20:39:46.759317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.759346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.759368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.759474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.759486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.759494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.759511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:34.857 [2024-07-22 20:39:46.759523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:34.857 [2024-07-22 20:39:46.759536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.759566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.759583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.759659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.759671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.759676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.759694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:34.857 [2024-07-22 20:39:46.759708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.759718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.759744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.759759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.759836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.759846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.759851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.759866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.759880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.759894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.759907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.759923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.759994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.760003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.760009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.760023] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:34.857 [2024-07-22 20:39:46.760031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.760044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.760153] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:34.857 [2024-07-22 20:39:46.760160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.760172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.760198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.760221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 
20:39:46.760300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.760309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.760316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.760336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:34.857 [2024-07-22 20:39:46.760351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.760376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.760391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.760466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.760476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.760481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.760495] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:34.857 [2024-07-22 20:39:46.760503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:34.857 [2024-07-22 20:39:46.760515] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:34.857 [2024-07-22 20:39:46.760535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:34.857 [2024-07-22 20:39:46.760552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.760572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.857 [2024-07-22 20:39:46.760587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.760692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.857 [2024-07-22 20:39:46.760702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.857 [2024-07-22 20:39:46.760707] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, 
cccid=0 00:32:34.857 [2024-07-22 20:39:46.760723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.857 [2024-07-22 20:39:46.760731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760771] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760779] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.857 [2024-07-22 20:39:46.760858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.857 [2024-07-22 20:39:46.760863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.857 [2024-07-22 20:39:46.760886] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:34.857 [2024-07-22 20:39:46.760899] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:34.857 [2024-07-22 20:39:46.760906] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:34.857 [2024-07-22 20:39:46.760914] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:34.857 [2024-07-22 20:39:46.760922] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:34.857 [2024-07-22 20:39:46.760930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:34.857 [2024-07-22 20:39:46.760942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:34.857 [2024-07-22 20:39:46.760953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.857 [2024-07-22 20:39:46.760968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025380) 00:32:34.857 [2024-07-22 20:39:46.760984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:34.857 [2024-07-22 20:39:46.761000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.857 [2024-07-22 20:39:46.761080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.858 [2024-07-22 20:39:46.761089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.858 [2024-07-22 20:39:46.761097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:34.858 [2024-07-22 20:39:46.761116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.858 [2024-07-22 20:39:46.761152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.858 [2024-07-22 20:39:46.761181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.858 [2024-07-22 20:39:46.761219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.858 [2024-07-22 20:39:46.761247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.858 [2024-07-22 20:39:46.761312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:32:34.858 [2024-07-22 20:39:46.761322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:32:34.858 [2024-07-22 20:39:46.761329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:32:34.858 [2024-07-22 20:39:46.761336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:34.858 [2024-07-22 20:39:46.761343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.858 [2024-07-22 20:39:46.761438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.858 [2024-07-22 20:39:46.761451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.858 [2024-07-22 
20:39:46.761456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.858 [2024-07-22 20:39:46.761471] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:34.858 [2024-07-22 20:39:46.761480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:34.858 [2024-07-22 20:39:46.761551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.858 [2024-07-22 20:39:46.761633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.858 [2024-07-22 20:39:46.761642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.858 [2024-07-22 20:39:46.761647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.858 [2024-07-22 20:39:46.761738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.761772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.761791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.858 [2024-07-22 20:39:46.761805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.858 [2024-07-22 20:39:46.761896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.858 [2024-07-22 20:39:46.761906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.858 [2024-07-22 20:39:46.761913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 
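The DEBUG trace above (connect adminq → read vs → read cap → check en → enable controller by writing CC.EN = 1 → wait for CSTS.RDY = 1 → identify controller → configure AER → set keep alive timeout → set number of queues → identify active ns) is SPDK's controller-initialization state machine running underneath the spdk_nvme_identify invocation shown at 20:39:46. A minimal host-side sketch of the same attach, assuming the public SPDK NVMe API (spdk_env_init, spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data); build flags and error reporting are trimmed, and the transport ID string is copied from the -r argument above:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";      /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string that was passed with -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the init sequence traced in this log:
	 * read VS/CAP, set CC.EN = 1, wait for CSTS.RDY = 1, Identify
	 * Controller, configure AER, keep-alive, number of queues, ... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() returns only once the controller reaches the ready state traced further down, which is why the identify tool can immediately print the controller data for nqn.2016-06.io.spdk:cnode1 that follows later in this log.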
00:32:34.858 [2024-07-22 20:39:46.761927] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.858 [2024-07-22 20:39:46.761934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761970] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.761977] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.858 [2024-07-22 20:39:46.802310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.858 [2024-07-22 20:39:46.802316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.858 [2024-07-22 20:39:46.802353] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:34.858 [2024-07-22 20:39:46.802370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.802385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.802399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.858 [2024-07-22 20:39:46.802421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.858 [2024-07-22 20:39:46.802439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.858 [2024-07-22 20:39:46.802538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.858 [2024-07-22 20:39:46.802548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.858 [2024-07-22 20:39:46.802553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802560] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:34.858 [2024-07-22 20:39:46.802567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.858 [2024-07-22 20:39:46.802573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802608] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.802615] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.847213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:34.858 [2024-07-22 20:39:46.847233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:34.858 [2024-07-22 20:39:46.847239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.847245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:34.858 [2024-07-22 20:39:46.847270] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.847292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:34.858 [2024-07-22 20:39:46.847307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:34.858 [2024-07-22 20:39:46.847314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:34.859 [2024-07-22 20:39:46.847330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.859 [2024-07-22 20:39:46.847350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:34.859 [2024-07-22 20:39:46.847455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:34.859 [2024-07-22 20:39:46.847464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:34.859 [2024-07-22 20:39:46.847470] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:34.859 [2024-07-22 20:39:46.847476] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=4 00:32:34.859 [2024-07-22 20:39:46.847483] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:34.859 [2024-07-22 20:39:46.847489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:34.859 [2024-07-22 20:39:46.847524] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:34.859 [2024-07-22 20:39:46.847531] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.888296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.888301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.888325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888389] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:35.122 [2024-07-22 20:39:46.888396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:35.122 [2024-07-22 20:39:46.888404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:35.122 [2024-07-22 20:39:46.888436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.888458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.888468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.888492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.122 [2024-07-22 20:39:46.888512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:35.122 [2024-07-22 20:39:46.888524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:35.122 [2024-07-22 20:39:46.888617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.888627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.888633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.888653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.888662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.888667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.888685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.888701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.888715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:35.122 [2024-07-22 20:39:46.888795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.888804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.888810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 
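Just above, "Namespace 1 was added" and the identify ns / identify namespace id descriptors steps show the driver picking up the single namespace behind nqn.2016-06.io.spdk:cnode1 before declaring the controller ready and starting the keep-alive and Get Features polling. Continuing the hedged sketch from the connect example earlier, the active namespaces could be walked with the public SPDK accessors this trace references (spdk_nvme_ctrlr_get_ns appears at nvme_ctrlr.c:4693 above); sketch only, meant to be dropped into the previous program:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* List the active namespaces the initialization sequence above just
 * identified.  Assumes a ctrlr handle obtained from spdk_nvme_connect(). */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace %u: %ju bytes, sector size %u\n",
		       nsid,
		       (uintmax_t)spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}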
[2024-07-22 20:39:46.888815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.888828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.888844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.888857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:35.122 [2024-07-22 20:39:46.888925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.888935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.888940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.888958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.888964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.888974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.888987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:35.122 [2024-07-22 20:39:46.889057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.122 [2024-07-22 20:39:46.889066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.122 [2024-07-22 20:39:46.889072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.889077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:35.122 [2024-07-22 20:39:46.889102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.889109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.889123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.889135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.889142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.889153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.889166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.889172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.889183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.122 [2024-07-22 20:39:46.889196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.122 [2024-07-22 20:39:46.889211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025380) 00:32:35.122 [2024-07-22 20:39:46.889223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.123 [2024-07-22 20:39:46.889240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:32:35.123 [2024-07-22 20:39:46.889249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:32:35.123 [2024-07-22 20:39:46.889256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:32:35.123 [2024-07-22 20:39:46.889262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:35.123 [2024-07-22 20:39:46.889405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.123 [2024-07-22 20:39:46.889415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.123 [2024-07-22 20:39:46.889421] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889428] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=8192, cccid=5 00:32:35.123 [2024-07-22 20:39:46.889440] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025380): expected_datao=0, payload_size=8192 00:32:35.123 [2024-07-22 20:39:46.889447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889496] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889504] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.123 [2024-07-22 20:39:46.889521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.123 [2024-07-22 20:39:46.889527] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889533] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=512, cccid=4 00:32:35.123 [2024-07-22 20:39:46.889539] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025380): expected_datao=0, payload_size=512 00:32:35.123 [2024-07-22 20:39:46.889545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889559] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889564] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.123 [2024-07-22 20:39:46.889580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.123 [2024-07-22 20:39:46.889585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=512, cccid=6 
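The four GET LOG PAGE admin commands above belong to the "set supported log pages" step traced a little earlier: the low byte of cdw10 carries the log ID (01h Error Information, 02h SMART / Health Information, 03h Firmware Slot Information, 05h Commands Supported and Effects) and bits 31:16 carry NUMDL, the transfer length in dwords minus one. That is why cdw10 07ff0001 pairs with the 8192-byte c2h transfer for cccid 5 (128 error-log entries of 64 bytes each, matching "Error Log Page Entries Supported: 128" printed for this controller below), 007f0002 and 007f0003 with the two 512-byte transfers, and 03ff0005 with the 4096-byte transfer. A standalone arithmetic sketch of that encoding (NVMe base-spec layout, not SPDK code):

#include <stdint.h>
#include <stdio.h>

/* cdw10 = (NUMDL << 16) | LID, where NUMD = number of dwords - 1
 * (cdw11/NUMDU is zero for these small transfers). */
static uint32_t get_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
{
	uint32_t numd = payload_bytes / 4 - 1;   /* dwords minus one */

	return ((numd & 0xffff) << 16) | lid;
}

int main(void)
{
	/* Reproduce the four commands issued above. */
	printf("%08x\n", get_log_page_cdw10(0x01, 8192)); /* 07ff0001: error log   */
	printf("%08x\n", get_log_page_cdw10(0x02, 512));  /* 007f0002: health log  */
	printf("%08x\n", get_log_page_cdw10(0x03, 512));  /* 007f0003: fw slot log */
	printf("%08x\n", get_log_page_cdw10(0x05, 4096)); /* 03ff0005: cmd effects */
	return 0;
}

Identify data, by contrast, is a fixed 4 KiB payload, which matches the datal=4096 c2h transfers seen earlier in the trace.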
00:32:35.123 [2024-07-22 20:39:46.889599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000025380): expected_datao=0, payload_size=512 00:32:35.123 [2024-07-22 20:39:46.889605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889614] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889619] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:35.123 [2024-07-22 20:39:46.889635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:35.123 [2024-07-22 20:39:46.889640] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889646] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025380): datao=0, datal=4096, cccid=7 00:32:35.123 [2024-07-22 20:39:46.889652] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025380): expected_datao=0, payload_size=4096 00:32:35.123 [2024-07-22 20:39:46.889658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889707] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.889713] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.930291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.123 [2024-07-22 20:39:46.930310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.123 [2024-07-22 20:39:46.930316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.930323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025380 00:32:35.123 [2024-07-22 20:39:46.930349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.123 [2024-07-22 20:39:46.930358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.123 [2024-07-22 20:39:46.930363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.930369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025380 00:32:35.123 [2024-07-22 20:39:46.930384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.123 [2024-07-22 20:39:46.930392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.123 [2024-07-22 20:39:46.930397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.930403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025380 00:32:35.123 [2024-07-22 20:39:46.930414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.123 [2024-07-22 20:39:46.930427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.123 [2024-07-22 20:39:46.930432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.123 [2024-07-22 20:39:46.930438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025380 00:32:35.123 ===================================================== 00:32:35.123 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:35.123 
===================================================== 00:32:35.123 Controller Capabilities/Features 00:32:35.123 ================================ 00:32:35.123 Vendor ID: 8086 00:32:35.123 Subsystem Vendor ID: 8086 00:32:35.123 Serial Number: SPDK00000000000001 00:32:35.123 Model Number: SPDK bdev Controller 00:32:35.123 Firmware Version: 24.09 00:32:35.123 Recommended Arb Burst: 6 00:32:35.123 IEEE OUI Identifier: e4 d2 5c 00:32:35.123 Multi-path I/O 00:32:35.123 May have multiple subsystem ports: Yes 00:32:35.123 May have multiple controllers: Yes 00:32:35.123 Associated with SR-IOV VF: No 00:32:35.123 Max Data Transfer Size: 131072 00:32:35.123 Max Number of Namespaces: 32 00:32:35.123 Max Number of I/O Queues: 127 00:32:35.123 NVMe Specification Version (VS): 1.3 00:32:35.123 NVMe Specification Version (Identify): 1.3 00:32:35.123 Maximum Queue Entries: 128 00:32:35.123 Contiguous Queues Required: Yes 00:32:35.123 Arbitration Mechanisms Supported 00:32:35.123 Weighted Round Robin: Not Supported 00:32:35.123 Vendor Specific: Not Supported 00:32:35.123 Reset Timeout: 15000 ms 00:32:35.123 Doorbell Stride: 4 bytes 00:32:35.123 NVM Subsystem Reset: Not Supported 00:32:35.123 Command Sets Supported 00:32:35.123 NVM Command Set: Supported 00:32:35.123 Boot Partition: Not Supported 00:32:35.123 Memory Page Size Minimum: 4096 bytes 00:32:35.123 Memory Page Size Maximum: 4096 bytes 00:32:35.123 Persistent Memory Region: Not Supported 00:32:35.123 Optional Asynchronous Events Supported 00:32:35.123 Namespace Attribute Notices: Supported 00:32:35.123 Firmware Activation Notices: Not Supported 00:32:35.123 ANA Change Notices: Not Supported 00:32:35.124 PLE Aggregate Log Change Notices: Not Supported 00:32:35.124 LBA Status Info Alert Notices: Not Supported 00:32:35.124 EGE Aggregate Log Change Notices: Not Supported 00:32:35.124 Normal NVM Subsystem Shutdown event: Not Supported 00:32:35.124 Zone Descriptor Change Notices: Not Supported 00:32:35.124 Discovery Log Change Notices: Not Supported 00:32:35.124 Controller Attributes 00:32:35.124 128-bit Host Identifier: Supported 00:32:35.124 Non-Operational Permissive Mode: Not Supported 00:32:35.124 NVM Sets: Not Supported 00:32:35.124 Read Recovery Levels: Not Supported 00:32:35.124 Endurance Groups: Not Supported 00:32:35.124 Predictable Latency Mode: Not Supported 00:32:35.124 Traffic Based Keep ALive: Not Supported 00:32:35.124 Namespace Granularity: Not Supported 00:32:35.124 SQ Associations: Not Supported 00:32:35.124 UUID List: Not Supported 00:32:35.124 Multi-Domain Subsystem: Not Supported 00:32:35.124 Fixed Capacity Management: Not Supported 00:32:35.124 Variable Capacity Management: Not Supported 00:32:35.124 Delete Endurance Group: Not Supported 00:32:35.124 Delete NVM Set: Not Supported 00:32:35.124 Extended LBA Formats Supported: Not Supported 00:32:35.124 Flexible Data Placement Supported: Not Supported 00:32:35.124 00:32:35.124 Controller Memory Buffer Support 00:32:35.124 ================================ 00:32:35.124 Supported: No 00:32:35.124 00:32:35.124 Persistent Memory Region Support 00:32:35.124 ================================ 00:32:35.124 Supported: No 00:32:35.124 00:32:35.124 Admin Command Set Attributes 00:32:35.124 ============================ 00:32:35.124 Security Send/Receive: Not Supported 00:32:35.124 Format NVM: Not Supported 00:32:35.124 Firmware Activate/Download: Not Supported 00:32:35.124 Namespace Management: Not Supported 00:32:35.124 Device Self-Test: Not Supported 00:32:35.124 Directives: Not Supported 
00:32:35.124 NVMe-MI: Not Supported 00:32:35.124 Virtualization Management: Not Supported 00:32:35.124 Doorbell Buffer Config: Not Supported 00:32:35.124 Get LBA Status Capability: Not Supported 00:32:35.124 Command & Feature Lockdown Capability: Not Supported 00:32:35.124 Abort Command Limit: 4 00:32:35.124 Async Event Request Limit: 4 00:32:35.124 Number of Firmware Slots: N/A 00:32:35.124 Firmware Slot 1 Read-Only: N/A 00:32:35.124 Firmware Activation Without Reset: N/A 00:32:35.124 Multiple Update Detection Support: N/A 00:32:35.124 Firmware Update Granularity: No Information Provided 00:32:35.124 Per-Namespace SMART Log: No 00:32:35.124 Asymmetric Namespace Access Log Page: Not Supported 00:32:35.124 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:35.124 Command Effects Log Page: Supported 00:32:35.124 Get Log Page Extended Data: Supported 00:32:35.124 Telemetry Log Pages: Not Supported 00:32:35.124 Persistent Event Log Pages: Not Supported 00:32:35.124 Supported Log Pages Log Page: May Support 00:32:35.124 Commands Supported & Effects Log Page: Not Supported 00:32:35.124 Feature Identifiers & Effects Log Page:May Support 00:32:35.124 NVMe-MI Commands & Effects Log Page: May Support 00:32:35.124 Data Area 4 for Telemetry Log: Not Supported 00:32:35.124 Error Log Page Entries Supported: 128 00:32:35.124 Keep Alive: Supported 00:32:35.124 Keep Alive Granularity: 10000 ms 00:32:35.124 00:32:35.124 NVM Command Set Attributes 00:32:35.124 ========================== 00:32:35.124 Submission Queue Entry Size 00:32:35.124 Max: 64 00:32:35.124 Min: 64 00:32:35.124 Completion Queue Entry Size 00:32:35.124 Max: 16 00:32:35.124 Min: 16 00:32:35.124 Number of Namespaces: 32 00:32:35.124 Compare Command: Supported 00:32:35.124 Write Uncorrectable Command: Not Supported 00:32:35.124 Dataset Management Command: Supported 00:32:35.124 Write Zeroes Command: Supported 00:32:35.124 Set Features Save Field: Not Supported 00:32:35.124 Reservations: Supported 00:32:35.124 Timestamp: Not Supported 00:32:35.124 Copy: Supported 00:32:35.124 Volatile Write Cache: Present 00:32:35.124 Atomic Write Unit (Normal): 1 00:32:35.124 Atomic Write Unit (PFail): 1 00:32:35.124 Atomic Compare & Write Unit: 1 00:32:35.124 Fused Compare & Write: Supported 00:32:35.124 Scatter-Gather List 00:32:35.124 SGL Command Set: Supported 00:32:35.124 SGL Keyed: Supported 00:32:35.124 SGL Bit Bucket Descriptor: Not Supported 00:32:35.124 SGL Metadata Pointer: Not Supported 00:32:35.124 Oversized SGL: Not Supported 00:32:35.124 SGL Metadata Address: Not Supported 00:32:35.124 SGL Offset: Supported 00:32:35.124 Transport SGL Data Block: Not Supported 00:32:35.124 Replay Protected Memory Block: Not Supported 00:32:35.124 00:32:35.124 Firmware Slot Information 00:32:35.124 ========================= 00:32:35.124 Active slot: 1 00:32:35.124 Slot 1 Firmware Revision: 24.09 00:32:35.124 00:32:35.124 00:32:35.124 Commands Supported and Effects 00:32:35.124 ============================== 00:32:35.124 Admin Commands 00:32:35.124 -------------- 00:32:35.124 Get Log Page (02h): Supported 00:32:35.124 Identify (06h): Supported 00:32:35.124 Abort (08h): Supported 00:32:35.124 Set Features (09h): Supported 00:32:35.124 Get Features (0Ah): Supported 00:32:35.124 Asynchronous Event Request (0Ch): Supported 00:32:35.124 Keep Alive (18h): Supported 00:32:35.124 I/O Commands 00:32:35.124 ------------ 00:32:35.124 Flush (00h): Supported LBA-Change 00:32:35.124 Write (01h): Supported LBA-Change 00:32:35.124 Read (02h): Supported 00:32:35.124 Compare (05h): 
Supported 00:32:35.124 Write Zeroes (08h): Supported LBA-Change 00:32:35.124 Dataset Management (09h): Supported LBA-Change 00:32:35.124 Copy (19h): Supported LBA-Change 00:32:35.125 00:32:35.125 Error Log 00:32:35.125 ========= 00:32:35.125 00:32:35.125 Arbitration 00:32:35.125 =========== 00:32:35.125 Arbitration Burst: 1 00:32:35.125 00:32:35.125 Power Management 00:32:35.125 ================ 00:32:35.125 Number of Power States: 1 00:32:35.125 Current Power State: Power State #0 00:32:35.125 Power State #0: 00:32:35.125 Max Power: 0.00 W 00:32:35.125 Non-Operational State: Operational 00:32:35.125 Entry Latency: Not Reported 00:32:35.125 Exit Latency: Not Reported 00:32:35.125 Relative Read Throughput: 0 00:32:35.125 Relative Read Latency: 0 00:32:35.125 Relative Write Throughput: 0 00:32:35.125 Relative Write Latency: 0 00:32:35.125 Idle Power: Not Reported 00:32:35.125 Active Power: Not Reported 00:32:35.125 Non-Operational Permissive Mode: Not Supported 00:32:35.125 00:32:35.125 Health Information 00:32:35.125 ================== 00:32:35.125 Critical Warnings: 00:32:35.125 Available Spare Space: OK 00:32:35.125 Temperature: OK 00:32:35.125 Device Reliability: OK 00:32:35.125 Read Only: No 00:32:35.125 Volatile Memory Backup: OK 00:32:35.125 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:35.125 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:32:35.125 Available Spare: 0% 00:32:35.125 Available Spare Threshold: 0% 00:32:35.125 Life Percentage Used:[2024-07-22 20:39:46.930599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.930609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025380) 00:32:35.125 [2024-07-22 20:39:46.930622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.125 [2024-07-22 20:39:46.930641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:32:35.125 [2024-07-22 20:39:46.930718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.125 [2024-07-22 20:39:46.930731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.125 [2024-07-22 20:39:46.930737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.930744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.930792] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:35.125 [2024-07-22 20:39:46.930808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.930820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.125 [2024-07-22 20:39:46.930828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.930836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.125 [2024-07-22 20:39:46.930844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.930852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.125 [2024-07-22 20:39:46.930859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.930867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.125 [2024-07-22 20:39:46.930879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.930886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.930892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:35.125 [2024-07-22 20:39:46.930904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.125 [2024-07-22 20:39:46.930922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:35.125 [2024-07-22 20:39:46.931050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.125 [2024-07-22 20:39:46.931060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.125 [2024-07-22 20:39:46.931066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.931073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.931085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.931092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.931103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:35.125 [2024-07-22 20:39:46.931114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.125 [2024-07-22 20:39:46.931133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:35.125 [2024-07-22 20:39:46.935214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.125 [2024-07-22 20:39:46.935232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.125 [2024-07-22 20:39:46.935238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.935244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.935253] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:35.125 [2024-07-22 20:39:46.935262] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:35.125 [2024-07-22 20:39:46.935280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.935287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.935294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025380) 00:32:35.125 [2024-07-22 20:39:46.935311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.125 [2024-07-22 20:39:46.935331] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:32:35.125 [2024-07-22 20:39:46.935424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:35.125 [2024-07-22 20:39:46.935433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:35.125 [2024-07-22 20:39:46.935439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:35.125 [2024-07-22 20:39:46.935445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025380 00:32:35.125 [2024-07-22 20:39:46.935457] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:32:35.125 0% 00:32:35.125 Data Units Read: 0 00:32:35.125 Data Units Written: 0 00:32:35.125 Host Read Commands: 0 00:32:35.125 Host Write Commands: 0 00:32:35.125 Controller Busy Time: 0 minutes 00:32:35.125 Power Cycles: 0 00:32:35.125 Power On Hours: 0 hours 00:32:35.125 Unsafe Shutdowns: 0 00:32:35.125 Unrecoverable Media Errors: 0 00:32:35.126 Lifetime Error Log Entries: 0 00:32:35.126 Warning Temperature Time: 0 minutes 00:32:35.126 Critical Temperature Time: 0 minutes 00:32:35.126 00:32:35.126 Number of Queues 00:32:35.126 ================ 00:32:35.126 Number of I/O Submission Queues: 127 00:32:35.126 Number of I/O Completion Queues: 127 00:32:35.126 00:32:35.126 Active Namespaces 00:32:35.126 ================= 00:32:35.126 Namespace ID:1 00:32:35.126 Error Recovery Timeout: Unlimited 00:32:35.126 Command Set Identifier: NVM (00h) 00:32:35.126 Deallocate: Supported 00:32:35.126 Deallocated/Unwritten Error: Not Supported 00:32:35.126 Deallocated Read Value: Unknown 00:32:35.126 Deallocate in Write Zeroes: Not Supported 00:32:35.126 Deallocated Guard Field: 0xFFFF 00:32:35.126 Flush: Supported 00:32:35.126 Reservation: Supported 00:32:35.126 Namespace Sharing Capabilities: Multiple Controllers 00:32:35.126 Size (in LBAs): 131072 (0GiB) 00:32:35.126 Capacity (in LBAs): 131072 (0GiB) 00:32:35.126 Utilization (in LBAs): 131072 (0GiB) 00:32:35.126 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:35.126 EUI64: ABCDEF0123456789 00:32:35.126 UUID: 5148269f-cc7a-47a5-815e-361003b88579 00:32:35.126 Thin Provisioning: Not Supported 00:32:35.126 Per-NS Atomic Units: Yes 00:32:35.126 Atomic Boundary Size (Normal): 0 00:32:35.126 Atomic Boundary Size (PFail): 0 00:32:35.126 Atomic Boundary Offset: 0 00:32:35.126 Maximum Single Source Range Length: 65535 00:32:35.126 Maximum Copy Length: 65535 00:32:35.126 Maximum Source Range Count: 1 00:32:35.126 NGUID/EUI64 Never Reused: No 00:32:35.126 Namespace Write Protected: No 00:32:35.126 Number of LBA Formats: 1 00:32:35.126 Current LBA Format: LBA Format #00 00:32:35.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:35.126 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 
-- # nvmftestfini 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:35.126 20:39:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:35.126 rmmod nvme_tcp 00:32:35.126 rmmod nvme_fabrics 00:32:35.126 rmmod nvme_keyring 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3789099 ']' 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3789099 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3789099 ']' 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3789099 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789099 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789099' 00:32:35.126 killing process with pid 3789099 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3789099 00:32:35.126 20:39:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3789099 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.107 20:39:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.651 00:32:38.651 real 0m12.142s 00:32:38.651 user 0m10.827s 00:32:38.651 sys 0m5.902s 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:38.651 ************************************ 00:32:38.651 END TEST nvmf_identify 00:32:38.651 ************************************ 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.651 ************************************ 00:32:38.651 START TEST nvmf_perf 00:32:38.651 ************************************ 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:38.651 * Looking for test storage... 00:32:38.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:38.651 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.652 20:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:46.794 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:46.794 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.794 20:39:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:46.794 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:46.794 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 
-- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:32:46.794 00:32:46.794 --- 10.0.0.2 ping statistics --- 00:32:46.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.794 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:32:46.794 00:32:46.794 --- 10.0.0.1 ping statistics --- 00:32:46.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.794 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:46.794 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3793778 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3793778 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3793778 ']' 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.795 20:39:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.795 [2024-07-22 20:39:57.828320] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:46.795 [2024-07-22 20:39:57.828444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.795 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.795 [2024-07-22 20:39:57.962127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.795 [2024-07-22 20:39:58.145171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:46.795 [2024-07-22 20:39:58.145220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.795 [2024-07-22 20:39:58.145233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.795 [2024-07-22 20:39:58.145243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.795 [2024-07-22 20:39:58.145254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.795 [2024-07-22 20:39:58.145432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.795 [2024-07-22 20:39:58.145516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.795 [2024-07-22 20:39:58.145649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.795 [2024-07-22 20:39:58.145675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:46.795 20:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:47.366 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:47.366 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:47.366 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:47.366 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.627 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:47.627 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:47.627 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:47.627 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:47.627 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:47.888 [2024-07-22 20:39:59.652780] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.888 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.888 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:47.888 20:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.148 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:48.149 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:48.409 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.409 [2024-07-22 20:40:00.319332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.409 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.669 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:48.669 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:48.669 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:48.669 20:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:50.051 Initializing NVMe Controllers 00:32:50.051 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:50.051 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:50.051 Initialization complete. Launching workers. 00:32:50.051 ======================================================== 00:32:50.051 Latency(us) 00:32:50.051 Device Information : IOPS MiB/s Average min max 00:32:50.051 PCIE (0000:65:00.0) NSID 1 from core 0: 74436.78 290.77 429.46 22.05 4762.02 00:32:50.051 ======================================================== 00:32:50.051 Total : 74436.78 290.77 429.46 22.05 4762.02 00:32:50.051 00:32:50.051 20:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.051 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.442 Initializing NVMe Controllers 00:32:51.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:51.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:51.442 Initialization complete. Launching workers. 
00:32:51.442 ======================================================== 00:32:51.442 Latency(us) 00:32:51.442 Device Information : IOPS MiB/s Average min max 00:32:51.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8628.56 208.99 46128.40 00:32:51.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16484.45 7943.27 47912.46 00:32:51.442 ======================================================== 00:32:51.442 Total : 177.00 0.69 11335.96 208.99 47912.46 00:32:51.442 00:32:51.442 20:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.442 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.824 Initializing NVMe Controllers 00:32:52.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:52.824 Initialization complete. Launching workers. 00:32:52.824 ======================================================== 00:32:52.824 Latency(us) 00:32:52.824 Device Information : IOPS MiB/s Average min max 00:32:52.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9236.72 36.08 3464.80 452.16 8971.50 00:32:52.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3757.63 14.68 8540.38 6793.39 16212.97 00:32:52.824 ======================================================== 00:32:52.824 Total : 12994.36 50.76 4932.53 452.16 16212.97 00:32:52.824 00:32:52.824 20:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:52.824 20:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:52.824 20:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.824 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.121 Initializing NVMe Controllers 00:32:56.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.121 Controller IO queue size 128, less than required. 00:32:56.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.121 Controller IO queue size 128, less than required. 00:32:56.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:56.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:56.121 Initialization complete. Launching workers. 
00:32:56.121 ======================================================== 00:32:56.121 Latency(us) 00:32:56.121 Device Information : IOPS MiB/s Average min max 00:32:56.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 974.49 243.62 137341.06 77881.17 257667.07 00:32:56.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 543.00 135.75 251784.64 95788.17 390638.67 00:32:56.121 ======================================================== 00:32:56.121 Total : 1517.49 379.37 178291.88 77881.17 390638.67 00:32:56.121 00:32:56.121 20:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:56.121 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.121 No valid NVMe controllers or AIO or URING devices found 00:32:56.121 Initializing NVMe Controllers 00:32:56.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.121 Controller IO queue size 128, less than required. 00:32:56.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.121 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:56.121 Controller IO queue size 128, less than required. 00:32:56.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:56.122 WARNING: Some requested NVMe devices were skipped 00:32:56.122 20:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:56.122 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.664 Initializing NVMe Controllers 00:32:58.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:58.664 Controller IO queue size 128, less than required. 00:32:58.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:58.664 Controller IO queue size 128, less than required. 00:32:58.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:58.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:58.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:58.664 Initialization complete. Launching workers. 
00:32:58.664 00:32:58.664 ==================== 00:32:58.664 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:58.664 TCP transport: 00:32:58.664 polls: 26860 00:32:58.664 idle_polls: 10221 00:32:58.664 sock_completions: 16639 00:32:58.664 nvme_completions: 4125 00:32:58.664 submitted_requests: 6288 00:32:58.664 queued_requests: 1 00:32:58.664 00:32:58.664 ==================== 00:32:58.664 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:58.664 TCP transport: 00:32:58.664 polls: 28900 00:32:58.664 idle_polls: 11077 00:32:58.664 sock_completions: 17823 00:32:58.664 nvme_completions: 4127 00:32:58.664 submitted_requests: 6216 00:32:58.664 queued_requests: 1 00:32:58.664 ======================================================== 00:32:58.664 Latency(us) 00:32:58.664 Device Information : IOPS MiB/s Average min max 00:32:58.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1030.90 257.73 130061.23 61339.49 284115.43 00:32:58.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1031.40 257.85 128744.91 65655.55 339003.38 00:32:58.664 ======================================================== 00:32:58.664 Total : 2062.31 515.58 129402.91 61339.49 339003.38 00:32:58.664 00:32:58.664 20:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:58.664 20:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.925 20:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:58.925 20:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:58.925 20:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d3b833b5-90f8-4883-b5a8-2fa48acdc585 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d3b833b5-90f8-4883-b5a8-2fa48acdc585 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d3b833b5-90f8-4883-b5a8-2fa48acdc585 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:00.311 20:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:00.311 { 00:33:00.311 "uuid": "d3b833b5-90f8-4883-b5a8-2fa48acdc585", 00:33:00.311 "name": "lvs_0", 00:33:00.311 "base_bdev": "Nvme0n1", 00:33:00.311 "total_data_clusters": 457407, 00:33:00.311 "free_clusters": 457407, 00:33:00.311 "block_size": 512, 00:33:00.311 "cluster_size": 4194304 00:33:00.311 } 00:33:00.311 ]' 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d3b833b5-90f8-4883-b5a8-2fa48acdc585") .free_clusters' 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:33:00.311 20:40:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d3b833b5-90f8-4883-b5a8-2fa48acdc585") .cluster_size' 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:33:00.311 1829628 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:33:00.311 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3b833b5-90f8-4883-b5a8-2fa48acdc585 lbd_0 20480 00:33:00.572 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8e7c1b0f-431b-4a52-9267-8a73fdbf8a47 00:33:00.572 20:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8e7c1b0f-431b-4a52-9267-8a73fdbf8a47 lvs_n_0 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c4be9b72-b19d-4030-8586-571cb269be19 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c4be9b72-b19d-4030-8586-571cb269be19 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c4be9b72-b19d-4030-8586-571cb269be19 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:01.957 20:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:02.217 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:02.217 { 00:33:02.217 "uuid": "d3b833b5-90f8-4883-b5a8-2fa48acdc585", 00:33:02.217 "name": "lvs_0", 00:33:02.217 "base_bdev": "Nvme0n1", 00:33:02.217 "total_data_clusters": 457407, 00:33:02.218 "free_clusters": 452287, 00:33:02.218 "block_size": 512, 00:33:02.218 "cluster_size": 4194304 00:33:02.218 }, 00:33:02.218 { 00:33:02.218 "uuid": "c4be9b72-b19d-4030-8586-571cb269be19", 00:33:02.218 "name": "lvs_n_0", 00:33:02.218 "base_bdev": "8e7c1b0f-431b-4a52-9267-8a73fdbf8a47", 00:33:02.218 "total_data_clusters": 5114, 00:33:02.218 "free_clusters": 5114, 00:33:02.218 "block_size": 512, 00:33:02.218 "cluster_size": 4194304 00:33:02.218 } 00:33:02.218 ]' 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c4be9b72-b19d-4030-8586-571cb269be19") .free_clusters' 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c4be9b72-b19d-4030-8586-571cb269be19") .cluster_size' 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:33:02.218 20456 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:33:02.218 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4be9b72-b19d-4030-8586-571cb269be19 lbd_nest_0 20456 00:33:02.478 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4f34b9a1-3147-4142-a526-f5e2c7f534cf 00:33:02.478 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.775 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:02.775 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4f34b9a1-3147-4142-a526-f5e2c7f534cf 00:33:02.775 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.035 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:03.035 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:03.035 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:03.035 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:03.035 20:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:03.035 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.268 Initializing NVMe Controllers 00:33:15.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:15.268 Initialization complete. Launching workers. 00:33:15.268 ======================================================== 00:33:15.268 Latency(us) 00:33:15.268 Device Information : IOPS MiB/s Average min max 00:33:15.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.19 0.02 23211.19 289.80 46123.11 00:33:15.268 ======================================================== 00:33:15.268 Total : 43.19 0.02 23211.19 289.80 46123.11 00:33:15.268 00:33:15.268 20:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:15.268 20:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.268 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.271 Initializing NVMe Controllers 00:33:25.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:25.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:25.271 Initialization complete. Launching workers. 
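This run and the ones around it come from the sweep traced at host/perf.sh@95-99: each entry of qd_depth is paired with each entry of io_size and handed to spdk_nvme_perf for a 10-second randrw test, six runs in total; the latency summary for the current 1/131072 combination follows directly below. A condensed sketch of that loop, using the array values and target address captured in this job:

    # Sketch of the qd/io-size sweep from host/perf.sh@95-99 above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            "$SPDK/build/bin/spdk_nvme_perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done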
00:33:25.271 ======================================================== 00:33:25.271 Latency(us) 00:33:25.271 Device Information : IOPS MiB/s Average min max 00:33:25.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 64.30 8.04 15559.90 5988.88 54875.87 00:33:25.271 ======================================================== 00:33:25.271 Total : 64.30 8.04 15559.90 5988.88 54875.87 00:33:25.271 00:33:25.271 20:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:25.271 20:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:25.271 20:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.271 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.271 Initializing NVMe Controllers 00:33:35.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:35.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:35.271 Initialization complete. Launching workers. 00:33:35.271 ======================================================== 00:33:35.271 Latency(us) 00:33:35.271 Device Information : IOPS MiB/s Average min max 00:33:35.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8230.50 4.02 3897.31 290.79 47892.37 00:33:35.271 ======================================================== 00:33:35.271 Total : 8230.50 4.02 3897.31 290.79 47892.37 00:33:35.271 00:33:35.271 20:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:35.271 20:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:35.271 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.273 Initializing NVMe Controllers 00:33:45.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:45.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:45.273 Initialization complete. Launching workers. 00:33:45.273 ======================================================== 00:33:45.273 Latency(us) 00:33:45.273 Device Information : IOPS MiB/s Average min max 00:33:45.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2063.00 257.87 15522.96 769.72 32527.14 00:33:45.273 ======================================================== 00:33:45.273 Total : 2063.00 257.87 15522.96 769.72 32527.14 00:33:45.273 00:33:45.273 20:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:45.273 20:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:45.273 20:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.273 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.336 Initializing NVMe Controllers 00:33:55.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:55.336 Controller IO queue size 128, less than required. 
00:33:55.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:55.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:55.336 Initialization complete. Launching workers. 00:33:55.336 ======================================================== 00:33:55.336 Latency(us) 00:33:55.337 Device Information : IOPS MiB/s Average min max 00:33:55.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15742.19 7.69 8135.06 1948.48 18804.58 00:33:55.337 ======================================================== 00:33:55.337 Total : 15742.19 7.69 8135.06 1948.48 18804.58 00:33:55.337 00:33:55.600 20:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:55.600 20:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:55.600 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.832 Initializing NVMe Controllers 00:34:07.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:07.832 Controller IO queue size 128, less than required. 00:34:07.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:07.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:07.832 Initialization complete. Launching workers. 00:34:07.832 ======================================================== 00:34:07.832 Latency(us) 00:34:07.832 Device Information : IOPS MiB/s Average min max 00:34:07.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1153.60 144.20 111048.31 16430.05 234573.68 00:34:07.832 ======================================================== 00:34:07.832 Total : 1153.60 144.20 111048.31 16430.05 234573.68 00:34:07.832 00:34:07.832 20:41:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:07.832 20:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f34b9a1-3147-4142-a526-f5e2c7f534cf 00:34:07.832 20:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:08.093 20:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e7c1b0f-431b-4a52-9267-8a73fdbf8a47 00:34:08.093 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:34:08.354 20:41:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:08.354 rmmod nvme_tcp 00:34:08.354 rmmod nvme_fabrics 00:34:08.354 rmmod nvme_keyring 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3793778 ']' 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3793778 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3793778 ']' 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3793778 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3793778 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3793778' 00:34:08.354 killing process with pid 3793778 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3793778 00:34:08.354 20:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3793778 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.654 20:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.040 20:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:13.040 00:34:13.040 real 1m34.780s 00:34:13.040 user 5m35.790s 00:34:13.040 sys 0m14.238s 00:34:13.040 20:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:13.040 20:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:13.040 ************************************ 00:34:13.040 END TEST nvmf_perf 00:34:13.040 ************************************ 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:13.302 
20:41:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.302 ************************************ 00:34:13.302 START TEST nvmf_fio_host 00:34:13.302 ************************************ 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:13.302 * Looking for test storage... 00:34:13.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.302 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.303 20:41:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:19.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.893 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:19.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:19.894 20:41:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:19.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:19.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.894 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.156 20:41:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:20.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:34:20.156 00:34:20.156 --- 10.0.0.2 ping statistics --- 00:34:20.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.156 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:20.156 00:34:20.156 --- 10.0.0.1 ping statistics --- 00:34:20.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.156 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:20.156 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3813893 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3813893 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3813893 ']' 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:20.417 20:41:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.417 [2024-07-22 20:41:32.284906] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:20.417 [2024-07-22 20:41:32.285008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.417 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.417 [2024-07-22 20:41:32.414549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.678 [2024-07-22 20:41:32.595659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.678 [2024-07-22 20:41:32.595706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.678 [2024-07-22 20:41:32.595719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.678 [2024-07-22 20:41:32.595729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.678 [2024-07-22 20:41:32.595740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.678 [2024-07-22 20:41:32.595843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.678 [2024-07-22 20:41:32.595926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.678 [2024-07-22 20:41:32.596058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.678 [2024-07-22 20:41:32.596082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:21.250 [2024-07-22 20:41:33.181219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.250 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:21.509 Malloc1 00:34:21.509 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.770 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:22.030 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.030 [2024-07-22 20:41:33.948165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.031 20:41:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:22.292 
20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:22.292 20:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:22.553 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:22.553 fio-3.35 00:34:22.553 Starting 1 thread 00:34:22.814 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.359 00:34:25.359 test: (groupid=0, jobs=1): err= 0: pid=3814731: Mon Jul 22 20:41:37 2024 00:34:25.359 read: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(97.2MiB/2005msec) 00:34:25.359 slat (usec): min=2, max=242, avg= 2.47, stdev= 2.03 00:34:25.359 clat (usec): min=3282, max=10324, avg=5650.42, stdev=412.34 00:34:25.359 lat (usec): min=3319, max=10327, avg=5652.89, stdev=412.33 00:34:25.359 clat percentiles (usec): 00:34:25.359 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 
5342], 00:34:25.359 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5735], 00:34:25.359 | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6259], 00:34:25.359 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 8848], 99.95th=[ 9634], 00:34:25.359 | 99.99th=[10159] 00:34:25.359 bw ( KiB/s): min=48032, max=50496, per=99.95%, avg=49640.00, stdev=1098.42, samples=4 00:34:25.359 iops : min=12008, max=12624, avg=12410.00, stdev=274.61, samples=4 00:34:25.359 write: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(97.1MiB/2005msec); 0 zone resets 00:34:25.359 slat (usec): min=2, max=196, avg= 2.56, stdev= 1.46 00:34:25.359 clat (usec): min=2515, max=9092, avg=4579.54, stdev=337.62 00:34:25.359 lat (usec): min=2541, max=9094, avg=4582.10, stdev=337.61 00:34:25.359 clat percentiles (usec): 00:34:25.359 | 1.00th=[ 3785], 5.00th=[ 4080], 10.00th=[ 4178], 20.00th=[ 4359], 00:34:25.359 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4686], 00:34:25.359 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5080], 00:34:25.359 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 6652], 99.95th=[ 8029], 00:34:25.359 | 99.99th=[ 8848] 00:34:25.359 bw ( KiB/s): min=48664, max=50240, per=100.00%, avg=49622.00, stdev=674.70, samples=4 00:34:25.359 iops : min=12166, max=12560, avg=12405.50, stdev=168.68, samples=4 00:34:25.359 lat (msec) : 4=1.66%, 10=98.32%, 20=0.01% 00:34:25.359 cpu : usr=70.81%, sys=25.35%, ctx=27, majf=0, minf=1527 00:34:25.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:25.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:25.359 issued rwts: total=24894,24864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:25.359 00:34:25.359 Run status group 0 (all jobs): 00:34:25.359 READ: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=97.2MiB (102MB), run=2005-2005msec 00:34:25.359 WRITE: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=97.1MiB (102MB), run=2005-2005msec 00:34:25.359 ----------------------------------------------------- 00:34:25.359 Suppressions used: 00:34:25.359 count bytes template 00:34:25.359 1 57 /usr/src/fio/parse.c 00:34:25.359 1 8 libtcmalloc_minimal.so 00:34:25.359 ----------------------------------------------------- 00:34:25.359 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:25.359 20:41:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:25.951 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:25.951 fio-3.35 00:34:25.951 Starting 1 thread 00:34:25.951 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.528 00:34:28.528 test: (groupid=0, jobs=1): err= 0: pid=3815476: Mon Jul 22 20:41:40 2024 00:34:28.528 read: IOPS=8336, BW=130MiB/s (137MB/s)(261MiB/2004msec) 00:34:28.528 slat (usec): min=3, max=123, avg= 3.95, stdev= 1.62 00:34:28.528 clat (usec): min=1541, max=18301, avg=9404.91, stdev=2251.53 00:34:28.528 lat (usec): min=1545, max=18305, avg=9408.87, stdev=2251.65 00:34:28.528 clat percentiles (usec): 00:34:28.528 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7439], 00:34:28.528 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9765], 00:34:28.528 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12518], 95.00th=[13042], 00:34:28.528 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17695], 99.95th=[17957], 00:34:28.528 | 99.99th=[18220] 00:34:28.528 bw ( KiB/s): min=53440, max=79008, per=51.10%, avg=68168.00, stdev=11921.60, samples=4 00:34:28.528 iops : min= 3340, max= 4938, avg=4260.50, stdev=745.10, samples=4 00:34:28.528 write: IOPS=5027, BW=78.6MiB/s (82.4MB/s)(139MiB/1769msec); 0 zone resets 00:34:28.528 slat (usec): min=40, max=324, avg=41.99, stdev= 7.45 00:34:28.528 clat (usec): min=4408, max=17534, avg=10273.00, stdev=1642.92 00:34:28.528 lat (usec): min=4449, max=17575, avg=10315.00, stdev=1644.25 00:34:28.528 clat percentiles (usec): 00:34:28.528 | 1.00th=[ 6980], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8848], 00:34:28.528 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10552], 00:34:28.528 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12518], 95.00th=[13304], 00:34:28.528 | 99.00th=[14746], 99.50th=[15139], 99.90th=[16319], 99.95th=[16581], 00:34:28.529 | 99.99th=[17433] 00:34:28.529 bw ( KiB/s): 
min=55808, max=81952, per=88.12%, avg=70888.00, stdev=12318.17, samples=4 00:34:28.529 iops : min= 3488, max= 5122, avg=4430.50, stdev=769.89, samples=4 00:34:28.529 lat (msec) : 2=0.03%, 4=0.09%, 10=57.42%, 20=42.47% 00:34:28.529 cpu : usr=83.98%, sys=13.37%, ctx=15, majf=0, minf=2220 00:34:28.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:34:28.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:28.529 issued rwts: total=16707,8894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:28.529 00:34:28.529 Run status group 0 (all jobs): 00:34:28.529 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=261MiB (274MB), run=2004-2004msec 00:34:28.529 WRITE: bw=78.6MiB/s (82.4MB/s), 78.6MiB/s-78.6MiB/s (82.4MB/s-82.4MB/s), io=139MiB (146MB), run=1769-1769msec 00:34:28.529 ----------------------------------------------------- 00:34:28.529 Suppressions used: 00:34:28.529 count bytes template 00:34:28.529 1 57 /usr/src/fio/parse.c 00:34:28.529 738 70848 /usr/src/fio/iolog.c 00:34:28.529 1 8 libtcmalloc_minimal.so 00:34:28.529 ----------------------------------------------------- 00:34:28.529 00:34:28.529 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:34:28.790 20:41:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:34:29.362 Nvme0n1 00:34:29.362 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=eb147866-f06c-465f-a132-a05613dbf81d 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb eb147866-f06c-465f-a132-a05613dbf81d 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=eb147866-f06c-465f-a132-a05613dbf81d 00:34:29.935 
20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:29.935 20:41:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:30.195 { 00:34:30.195 "uuid": "eb147866-f06c-465f-a132-a05613dbf81d", 00:34:30.195 "name": "lvs_0", 00:34:30.195 "base_bdev": "Nvme0n1", 00:34:30.195 "total_data_clusters": 1787, 00:34:30.195 "free_clusters": 1787, 00:34:30.195 "block_size": 512, 00:34:30.195 "cluster_size": 1073741824 00:34:30.195 } 00:34:30.195 ]' 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="eb147866-f06c-465f-a132-a05613dbf81d") .free_clusters' 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="eb147866-f06c-465f-a132-a05613dbf81d") .cluster_size' 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:34:30.195 1829888 00:34:30.195 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:34:30.455 71e2c298-a81d-4b5b-a4e2-15d8c3b90c24 00:34:30.455 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:30.455 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:30.716 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:30.977 
20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:30.977 20:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:31.238 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:31.238 fio-3.35 00:34:31.238 Starting 1 thread 00:34:31.238 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.784 00:34:33.784 test: (groupid=0, jobs=1): err= 0: pid=3816705: Mon Jul 22 20:41:45 2024 00:34:33.784 read: IOPS=9381, BW=36.6MiB/s (38.4MB/s)(73.5MiB/2005msec) 00:34:33.784 slat (usec): min=2, max=127, avg= 2.43, stdev= 1.25 00:34:33.784 clat (usec): min=2899, max=12448, avg=7532.16, stdev=575.22 00:34:33.784 lat (usec): min=2919, max=12450, avg=7534.59, stdev=575.15 00:34:33.784 clat percentiles (usec): 00:34:33.784 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:34:33.784 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:34:33.784 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:34:33.784 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[10683], 99.95th=[11600], 00:34:33.784 | 99.99th=[12387] 00:34:33.784 bw ( KiB/s): min=36312, max=38208, per=99.87%, avg=37476.00, stdev=814.34, samples=4 00:34:33.784 iops : min= 9078, max= 9552, avg=9369.00, stdev=203.58, samples=4 00:34:33.784 write: IOPS=9385, BW=36.7MiB/s (38.4MB/s)(73.5MiB/2005msec); 0 zone resets 00:34:33.784 slat (nsec): min=2346, max=105594, avg=2530.58, stdev=841.12 00:34:33.784 clat (usec): min=1431, max=11278, avg=6027.06, stdev=492.55 00:34:33.784 lat (usec): min=1440, max=11281, avg=6029.59, stdev=492.51 00:34:33.784 clat percentiles (usec): 00:34:33.784 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:34:33.784 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:34:33.784 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:34:33.784 | 99.00th=[ 7111], 99.50th=[ 7242], 
99.90th=[ 8848], 99.95th=[10028], 00:34:33.784 | 99.99th=[11207] 00:34:33.784 bw ( KiB/s): min=37256, max=37888, per=99.96%, avg=37524.00, stdev=293.03, samples=4 00:34:33.784 iops : min= 9314, max= 9472, avg=9381.00, stdev=73.26, samples=4 00:34:33.784 lat (msec) : 2=0.01%, 4=0.09%, 10=99.81%, 20=0.09% 00:34:33.784 cpu : usr=67.56%, sys=29.24%, ctx=51, majf=0, minf=1523 00:34:33.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:33.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.784 issued rwts: total=18810,18817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.784 00:34:33.784 Run status group 0 (all jobs): 00:34:33.784 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.5MiB (77.0MB), run=2005-2005msec 00:34:33.784 WRITE: bw=36.7MiB/s (38.4MB/s), 36.7MiB/s-36.7MiB/s (38.4MB/s-38.4MB/s), io=73.5MiB (77.1MB), run=2005-2005msec 00:34:34.045 ----------------------------------------------------- 00:34:34.045 Suppressions used: 00:34:34.045 count bytes template 00:34:34.045 1 58 /usr/src/fio/parse.c 00:34:34.045 1 8 libtcmalloc_minimal.so 00:34:34.045 ----------------------------------------------------- 00:34:34.045 00:34:34.045 20:41:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:34.306 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=ae569e50-f0df-4d2f-867b-7a7dc77ea55d 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb ae569e50-f0df-4d2f-867b-7a7dc77ea55d 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=ae569e50-f0df-4d2f-867b-7a7dc77ea55d 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:34.876 20:41:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:35.136 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:35.136 { 00:34:35.136 "uuid": "eb147866-f06c-465f-a132-a05613dbf81d", 00:34:35.136 "name": "lvs_0", 00:34:35.136 "base_bdev": "Nvme0n1", 00:34:35.136 "total_data_clusters": 1787, 00:34:35.136 "free_clusters": 0, 00:34:35.136 "block_size": 512, 00:34:35.136 "cluster_size": 1073741824 00:34:35.136 }, 00:34:35.136 { 00:34:35.136 "uuid": "ae569e50-f0df-4d2f-867b-7a7dc77ea55d", 00:34:35.136 "name": "lvs_n_0", 00:34:35.136 "base_bdev": "71e2c298-a81d-4b5b-a4e2-15d8c3b90c24", 00:34:35.136 "total_data_clusters": 457025, 00:34:35.136 "free_clusters": 457025, 00:34:35.137 "block_size": 512, 00:34:35.137 "cluster_size": 4194304 00:34:35.137 } 00:34:35.137 ]' 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] 
| select(.uuid=="ae569e50-f0df-4d2f-867b-7a7dc77ea55d") .free_clusters' 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ae569e50-f0df-4d2f-867b-7a7dc77ea55d") .cluster_size' 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:34:35.137 1828100 00:34:35.137 20:41:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:37.674 f490921b-fa8d-4dee-9247-c4d982ea4a1a 00:34:37.674 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:37.674 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:37.934 20:41:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:38.525 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:38.525 fio-3.35 00:34:38.525 Starting 1 thread 00:34:38.525 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.064 00:34:41.064 test: (groupid=0, jobs=1): err= 0: pid=3818213: Mon Jul 22 20:41:52 2024 00:34:41.064 read: IOPS=5737, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2009msec) 00:34:41.064 slat (usec): min=2, max=126, avg= 2.47, stdev= 1.66 00:34:41.064 clat (usec): min=4343, max=22345, avg=12348.26, stdev=1031.59 00:34:41.064 lat (usec): min=4364, max=22347, avg=12350.73, stdev=1031.47 00:34:41.064 clat percentiles (usec): 00:34:41.064 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:34:41.064 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:34:41.064 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:34:41.064 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18482], 99.95th=[19792], 00:34:41.064 | 99.99th=[22414] 00:34:41.064 bw ( KiB/s): min=21736, max=23520, per=99.92%, avg=22932.00, stdev=810.87, samples=4 00:34:41.064 iops : min= 5434, max= 5880, avg=5733.00, stdev=202.72, samples=4 00:34:41.064 write: IOPS=5728, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2009msec); 0 zone resets 00:34:41.064 slat (usec): min=2, max=122, avg= 2.58, stdev= 1.21 00:34:41.064 clat (usec): min=2070, max=18551, avg=9813.46, stdev=925.04 00:34:41.064 lat (usec): min=2081, max=18553, avg=9816.04, stdev=924.99 00:34:41.064 clat percentiles (usec): 00:34:41.064 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:34:41.064 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:34:41.064 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:34:41.064 | 99.00th=[11731], 99.50th=[12125], 99.90th=[17433], 99.95th=[17433], 00:34:41.064 | 99.99th=[18482] 00:34:41.064 bw ( KiB/s): min=22784, max=23040, per=99.87%, avg=22884.00, stdev=118.57, samples=4 00:34:41.064 iops : min= 5696, max= 5760, avg=5721.00, stdev=29.64, samples=4 00:34:41.064 lat (msec) : 4=0.04%, 10=30.29%, 20=69.64%, 50=0.02% 00:34:41.064 cpu : usr=69.99%, sys=27.87%, ctx=59, majf=0, minf=1523 00:34:41.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:34:41.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:41.064 issued rwts: total=11527,11508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:41.064 00:34:41.064 Run status group 0 (all jobs): 00:34:41.064 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2009-2009msec 00:34:41.064 WRITE: 
bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.1MB), run=2009-2009msec 00:34:41.064 ----------------------------------------------------- 00:34:41.064 Suppressions used: 00:34:41.064 count bytes template 00:34:41.064 1 58 /usr/src/fio/parse.c 00:34:41.064 1 8 libtcmalloc_minimal.so 00:34:41.064 ----------------------------------------------------- 00:34:41.064 00:34:41.064 20:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:41.325 20:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:41.325 20:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:44.626 20:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:44.886 20:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:45.458 20:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:45.458 20:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:48.005 rmmod nvme_tcp 00:34:48.005 rmmod nvme_fabrics 00:34:48.005 rmmod nvme_keyring 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3813893 ']' 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3813893 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3813893 ']' 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3813893 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 3813893 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3813893' 00:34:48.005 killing process with pid 3813893 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3813893 00:34:48.005 20:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3813893 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.577 20:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:51.126 00:34:51.126 real 0m37.486s 00:34:51.126 user 2m53.685s 00:34:51.126 sys 0m12.447s 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.126 ************************************ 00:34:51.126 END TEST nvmf_fio_host 00:34:51.126 ************************************ 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.126 ************************************ 00:34:51.126 START TEST nvmf_failover 00:34:51.126 ************************************ 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:51.126 * Looking for test storage... 
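The teardown traced just above guards the kill of the nvmf target with a process-name check: it first probes the PID with kill -0, resolves the PID's command name via ps --no-headers -o comm= (here it resolves to reactor_0), compares it against sudo, and only then kills and waits. A minimal sketch of that guard pattern, assuming Linux ps semantics; the function and variable names below are illustrative, not the exact autotest_common.sh helper:

  kill_guarded() {                              # illustrative name, not the real helper
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # PID already gone, nothing to do
    [ "$(uname)" = Linux ] || return 0          # the ps flags below are Linux-specific
    local name
    name=$(ps --no-headers -o comm= "$pid")     # command name only, no header line
    [ "$name" = sudo ] && return 1              # the real helper branches here; this sketch just refuses
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it was our child
  }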
00:34:51.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
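For orientation, the nvmf/common.sh header traced above pins the three TCP service ports the failover test will listen on and derives a host NQN/ID pair for the initiator. A condensed sketch of those assignments as they appear in the trace; the hostnqn value varies per run, and the parameter expansion used for NVME_HOSTID is an assumption about how the UUID suffix is derived:

  NVMF_PORT=4420                                    # primary listener
  NVMF_SECOND_PORT=4421                             # first failover listener
  NVMF_THIRD_PORT=4422                              # second failover listener
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}                   # assumption: UUID portion of the generated NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn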
00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.126 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.127 20:42:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.881 20:42:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:57.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:57.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:57.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.881 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:57.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:57.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:57.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:34:57.882 00:34:57.882 --- 10.0.0.2 ping statistics --- 00:34:57.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.882 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:57.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:34:57.882 00:34:57.882 --- 10.0.0.1 ping statistics --- 00:34:57.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.882 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3824335 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3824335 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3824335 ']' 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:57.882 20:42:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:57.882 [2024-07-22 20:42:09.867528] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:57.882 [2024-07-22 20:42:09.867630] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.142 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.142 [2024-07-22 20:42:10.005143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:58.403 [2024-07-22 20:42:10.193332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.403 [2024-07-22 20:42:10.193380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.403 [2024-07-22 20:42:10.193393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.403 [2024-07-22 20:42:10.193403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.403 [2024-07-22 20:42:10.193414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
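The nvmf_tcp_init sequence above is what gives the test a real TCP path between two addresses on one host: the target-side NIC (cvl_0_0) is moved into a private network namespace, each side gets an address on 10.0.0.0/24, and a one-packet ping in each direction proves reachability before the target is launched inside that namespace. Condensed from the exact commands in the trace:

  ip netns add cvl_0_0_ns_spdk                               # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
  # nvmf_tgt is then started under 'ip netns exec cvl_0_0_ns_spdk' so its listeners bind 10.0.0.2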
00:34:58.403 [2024-07-22 20:42:10.193557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.403 [2024-07-22 20:42:10.193674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.403 [2024-07-22 20:42:10.193700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:58.664 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.664 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:34:58.664 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:58.664 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:58.664 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:58.924 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.924 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:58.924 [2024-07-22 20:42:10.836536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.924 20:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:59.184 Malloc0 00:34:59.184 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:59.446 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:59.446 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.707 [2024-07-22 20:42:11.572192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.707 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:59.968 [2024-07-22 20:42:11.732599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:59.968 [2024-07-22 20:42:11.893131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3825127 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3825127 /var/tmp/bdevperf.sock 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3825127 ']' 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:59.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:59.968 20:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:00.910 20:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:00.910 20:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:35:00.910 20:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:01.170 NVMe0n1 00:35:01.171 20:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:01.448 00:35:01.448 20:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3825363 00:35:01.448 20:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:01.448 20:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:35:02.838 20:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.838 [2024-07-22 20:42:14.585225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585311] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 [2024-07-22 20:42:14.585365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:35:02.838 20:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:35:06.131 20:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:06.131 00:35:06.131 20:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:06.131 [2024-07-22 20:42:18.111004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 [2024-07-22 20:42:18.111087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:35:06.131 20:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:35:09.424 
20:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:09.424 [2024-07-22 20:42:21.285100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.424 20:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:10.361 20:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:10.622 20:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3825363 00:35:17.201 0 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3825127 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3825127 ']' 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3825127 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3825127 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3825127' 00:35:17.201 killing process with pid 3825127 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3825127 00:35:17.201 20:42:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3825127 00:35:17.471 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:17.471 [2024-07-22 20:42:11.985599] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:17.472 [2024-07-22 20:42:11.985724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825127 ] 00:35:17.472 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.472 [2024-07-22 20:42:12.097189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.472 [2024-07-22 20:42:12.276142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.472 Running I/O for 15 seconds... 
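For readers tracing the failover flow, the host/failover.sh steps logged above reduce to the RPC sequence sketched below, and the long run of ABORTED - SQ DELETION completions that follows appears to be the expected fallout of removing the active 4420 listener while bdevperf still has I/O in flight. The sketch is assembled only from commands already visible in this log; the SPDK script paths, NQN, address, ports, and queue settings are copied from the output above, while the assumption that the nvmf target and the bdevperf instance are already running (and the trailing wait) are simplifications, not a verbatim copy of the test script.

#!/usr/bin/env bash
# Condensed reconstruction of the nvmf_failover sequence seen in this log.
# Assumes the nvmf target is already up on 10.0.0.2 and bdevperf was started as:
#   $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: TCP transport, a malloc bdev as the namespace, listeners on three ports.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done

# Host side: attach two paths to the same controller and start the verify workload.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1

# Force failover while I/O runs: drop 4420, add a path on 4422, drop 4421,
# restore 4420, then drop 4422 and wait for the workload to finish.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
wait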
00:35:17.472 [2024-07-22 20:42:14.588504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.588984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.588995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.472 [2024-07-22 20:42:14.589353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.472 [2024-07-22 20:42:14.589364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.473 [2024-07-22 20:42:14.589525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.589682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.589978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.589990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.590002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.473 [2024-07-22 20:42:14.590024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.473 [2024-07-22 20:42:14.590163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.473 [2024-07-22 20:42:14.590176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.474 [2024-07-22 20:42:14.590186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.474 [2024-07-22 20:42:14.590213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.474 [2024-07-22 20:42:14.590235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86672 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.474 [2024-07-22 20:42:14.590371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.474 [2024-07-22 20:42:14.590393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.474 [2024-07-22 20:42:14.590415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.474 [2024-07-22 20:42:14.590436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:17.474 [2024-07-22 20:42:14.590635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86680 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86688 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86696 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86712 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86720 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86728 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86736 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:17.474 [2024-07-22 20:42:14.590942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86744 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.590969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.590979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.590986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.590995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86752 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.591031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86760 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.591069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86768 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.591105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86776 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.591144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86784 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591164] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.474 [2024-07-22 20:42:14.591181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86792 len:8 PRP1 0x0 PRP2 0x0 00:35:17.474 [2024-07-22 20:42:14.591191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.474 [2024-07-22 20:42:14.591207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.474 [2024-07-22 20:42:14.591216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86800 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86808 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86816 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86824 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86832 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86840 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86848 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86856 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86864 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86872 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86880 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 
20:42:14.591623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86888 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86896 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86904 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86912 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86920 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86928 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591844] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86936 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86944 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.591963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86960 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.591972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.591981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.591992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.592000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86968 len:8 PRP1 0x0 PRP2 0x0 00:35:17.475 [2024-07-22 20:42:14.592011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.475 [2024-07-22 20:42:14.592020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.475 [2024-07-22 20:42:14.592027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.475 [2024-07-22 20:42:14.592036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86976 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86088 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86096 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86104 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86112 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86120 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86128 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 
20:42:14.592298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.592343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.592353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.592361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.592370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.602838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.602883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.602894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.602906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.602918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.602928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.602936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.602947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86168 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.602959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.602968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.602976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.602985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86176 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.602996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86184 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86192 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86208 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86984 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.476 [2024-07-22 20:42:14.603217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86216 len:8 PRP1 0x0 PRP2 0x0 00:35:17.476 [2024-07-22 20:42:14.603227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.476 [2024-07-22 20:42:14.603237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.476 [2024-07-22 20:42:14.603246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 
00:35:17.477 [2024-07-22 20:42:14.603497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.603968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.603978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.603988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.603996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.604005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.604015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.604025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.604033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.604042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.604052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.604061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.604069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.604078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:35:17.477 [2024-07-22 20:42:14.604088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.477 [2024-07-22 20:42:14.604098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.477 [2024-07-22 20:42:14.604105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.477 [2024-07-22 20:42:14.604114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:17.478 [2024-07-22 20:42:14.604171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86448 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604397] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86560 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 
20:42:14.604853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86568 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86576 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86584 len:8 PRP1 0x0 PRP2 0x0 00:35:17.478 [2024-07-22 20:42:14.604944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-07-22 20:42:14.604954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.478 [2024-07-22 20:42:14.604961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.478 [2024-07-22 20:42:14.604969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86592 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.604979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.604989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.604997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85968 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85976 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605070] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85984 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85992 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86000 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86008 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86016 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86024 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.605280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.605290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.605297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:35:17.479 [2024-07-22 20:42:14.605306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86032 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86040 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86048 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86056 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86064 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86072 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612558] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86080 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86600 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86608 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86616 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86624 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86632 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:86640 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86648 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.479 [2024-07-22 20:42:14.612884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.479 [2024-07-22 20:42:14.612893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86656 len:8 PRP1 0x0 PRP2 0x0 00:35:17.479 [2024-07-22 20:42:14.612903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.479 [2024-07-22 20:42:14.612914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.480 [2024-07-22 20:42:14.612921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.480 [2024-07-22 20:42:14.612930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86664 len:8 PRP1 0x0 PRP2 0x0 00:35:17.480 [2024-07-22 20:42:14.612941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:14.612951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.480 [2024-07-22 20:42:14.612959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.480 [2024-07-22 20:42:14.612968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86672 len:8 PRP1 0x0 PRP2 0x0 00:35:17.480 [2024-07-22 20:42:14.612979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:14.613188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389300 was disconnected and freed. reset controller. 00:35:17.480 [2024-07-22 20:42:14.613228] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:17.480 [2024-07-22 20:42:14.613243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.480 [2024-07-22 20:42:14.613320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:17.480 [2024-07-22 20:42:14.617120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.480 [2024-07-22 20:42:14.700524] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
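The entries above show queued I/O being completed manually with ABORTED - SQ DELETION status while bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller. Below is a minimal sketch (not part of SPDK or this pipeline) for tallying the aborted READ/WRITE commands and the failover transitions from a saved copy of this console output; the regular expressions and the script/file names are assumptions based only on the line format visible here.

#!/usr/bin/env python3
# summarize_aborts.py -- hypothetical helper, not shipped with SPDK.
# Reads a saved console log on stdin and reports how many READ/WRITE
# commands were printed while being aborted, plus any failover events.
import re
import sys
from collections import Counter

# Matches lines like:
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86936 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# Matches lines like:
#   bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)

def summarize(stream):
    aborted = Counter()   # opcode -> number of printed (aborted) commands
    failovers = []        # (old_trid, new_trid) pairs seen in the log
    for line in stream:
        m = CMD_RE.search(line)
        if m:
            aborted[m.group(1)] += 1
        m = FAILOVER_RE.search(line)
        if m:
            failovers.append((m.group(1), m.group(2)))
    return aborted, failovers

if __name__ == "__main__":
    counts, failovers = summarize(sys.stdin)
    for opcode, n in sorted(counts.items()):
        print(f"{opcode}: {n} aborted commands printed")
    for old, new in failovers:
        print(f"failover: {old} -> {new}")

Usage would be along the lines of: python3 summarize_aborts.py < console.log (file names hypothetical). This only counts the command dumps emitted by the abort path, so the totals reflect what the log prints, not the I/O actually issued by the workload.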
00:35:17.480 [2024-07-22 20:42:18.111341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.480 [2024-07-22 20:42:18.111791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.111988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.111998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.480 [2024-07-22 20:42:18.112186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.480 [2024-07-22 20:42:18.112205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:17.481 [2024-07-22 20:42:18.112364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.481 [2024-07-22 20:42:18.112904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.481 [2024-07-22 20:42:18.112972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.481 [2024-07-22 20:42:18.112985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.112995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:17.482 [2024-07-22 20:42:18.113552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 
20:42:18.113785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.482 [2024-07-22 20:42:18.113887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.482 [2024-07-22 20:42:18.113900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.113909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.113923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.113933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.113945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.113955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.113967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.113977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.113995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114250] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:18.114375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389800 is same with the state(5) to be set 00:35:17.483 [2024-07-22 20:42:18.114401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.483 [2024-07-22 20:42:18.114410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.483 [2024-07-22 20:42:18.114422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131056 len:8 PRP1 0x0 PRP2 0x0 00:35:17.483 [2024-07-22 20:42:18.114434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114637] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389800 was disconnected and freed. reset controller. 
00:35:17.483 [2024-07-22 20:42:18.114653] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:35:17.483 [2024-07-22 20:42:18.114690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.483 [2024-07-22 20:42:18.114704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.483 [2024-07-22 20:42:18.114728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.483 [2024-07-22 20:42:18.114750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.483 [2024-07-22 20:42:18.114771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:18.114782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.483 [2024-07-22 20:42:18.118608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.483 [2024-07-22 20:42:18.118654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:17.483 [2024-07-22 20:42:18.163498] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:17.483 [2024-07-22 20:42:22.462577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.483 [2024-07-22 20:42:22.462823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.483 [2024-07-22 20:42:22.462849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.483 [2024-07-22 20:42:22.462873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.483 [2024-07-22 20:42:22.462886] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.462898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.462911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.462921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.462934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.462944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.462957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.462967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.462981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.462998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.484 [2024-07-22 20:42:22.463259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.484 [2024-07-22 20:42:22.463280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94496 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.484 [2024-07-22 20:42:22.463619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.484 [2024-07-22 20:42:22.463793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.484 [2024-07-22 20:42:22.463804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.463826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.463850] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.463872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.463898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.463920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.463943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.463978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.463988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.485 [2024-07-22 20:42:22.464567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.485 [2024-07-22 20:42:22.464626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.485 [2024-07-22 20:42:22.464637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 
20:42:22.464809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.464977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.464989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.486 [2024-07-22 20:42:22.465342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.486 [2024-07-22 20:42:22.465352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94280 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.487 [2024-07-22 20:42:22.465625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:17.487 [2024-07-22 20:42:22.465666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:17.487 [2024-07-22 20:42:22.465677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94328 len:8 PRP1 0x0 PRP2 0x0 00:35:17.487 [2024-07-22 20:42:22.465691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465895] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500038a200 was disconnected and freed. reset controller. 
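Editor's note: the long run of *NOTICE* lines above is bdev_nvme draining its queued I/O backlog while it tears the qpair down for a failover — every pending READ/WRITE is completed with NVMe status ABORTED - SQ DELETION (status code type 00h, code 08h) and printed by spdk_nvme_print_completion, after which the qpair is disconnected and freed. If only the headline numbers matter rather than the per-command dump, a grep over the captured bdevperf output is enough. The snippet below is a hypothetical helper, not part of the test suite, and LOG is a placeholder for wherever that output was captured (the test dumps one such file, try.txt, further down in this trace).

LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # placeholder: captured bdevperf output
grep -c 'ABORTED - SQ DELETION' "$LOG"   # queued I/Os completed as aborted during qpair teardown
grep -c 'Start failover from' "$LOG"     # path switches between the 4420/4421/4422 portals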
00:35:17.487 [2024-07-22 20:42:22.465911] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:35:17.487 [2024-07-22 20:42:22.465941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.487 [2024-07-22 20:42:22.465955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.487 [2024-07-22 20:42:22.465978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.465989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.487 [2024-07-22 20:42:22.465999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.466011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.487 [2024-07-22 20:42:22.466021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.487 [2024-07-22 20:42:22.466031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.487 [2024-07-22 20:42:22.469848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.487 [2024-07-22 20:42:22.469893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:17.487 [2024-07-22 20:42:22.601364] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
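The four ASYNC EVENT REQUEST aborts above are the admin queue being flushed as part of the same reset; once the driver reconnects to the next portal it logs "Resetting controller successful". The pass/fail criterion that host/failover.sh applies a few lines further down is simply a count of those messages: the failovers exercised earlier in the run must have produced exactly three successful resets. A minimal sketch of that check (variable names are illustrative, not the script's own):

count=$(grep -c 'Resetting controller successful' "$LOG")   # $LOG: captured bdevperf output from the run above
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi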
00:35:17.487 00:35:17.487 Latency(us) 00:35:17.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.487 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:17.487 Verification LBA range: start 0x0 length 0x4000 00:35:17.487 NVMe0n1 : 15.01 10024.03 39.16 553.69 0.00 12070.77 593.92 31675.73 00:35:17.487 =================================================================================================================== 00:35:17.487 Total : 10024.03 39.16 553.69 0.00 12070.77 593.92 31675.73 00:35:17.487 Received shutdown signal, test time was about 15.000000 seconds 00:35:17.487 00:35:17.487 Latency(us) 00:35:17.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.487 =================================================================================================================== 00:35:17.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3828365 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3828365 /var/tmp/bdevperf.sock 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3828365 ']' 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:17.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
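After the 15-second run is summarized, the trace shows the script launching a second bdevperf instance in deferred-start mode (-z) against /var/tmp/bdevperf.sock and waiting for that socket before configuring it over RPC. The real script uses the waitforlisten helper from autotest_common.sh; the sketch below only approximates the same launch-and-wait pattern with a plain polling loop, so treat it as an illustration rather than a drop-in replacement.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Poll the RPC socket until the app answers; rpc_get_methods is a cheap query.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done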
00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:17.487 20:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:18.500 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.500 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:35:18.500 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:18.500 [2024-07-22 20:42:30.308750] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:18.500 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:18.500 [2024-07-22 20:42:30.477105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:18.500 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:18.761 NVMe0n1 00:35:18.761 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:19.023 00:35:19.023 20:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:19.594 00:35:19.594 20:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:19.594 20:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:35:19.594 20:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:19.855 20:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:35:23.182 20:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:23.182 20:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:35:23.182 20:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:23.182 20:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3829478 00:35:23.182 20:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3829478 00:35:24.125 0 00:35:24.125 20:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:24.125 [2024-07-22 20:42:29.429220] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:24.125 [2024-07-22 20:42:29.429336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828365 ] 00:35:24.125 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.125 [2024-07-22 20:42:29.540092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.125 [2024-07-22 20:42:29.718182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.125 [2024-07-22 20:42:31.677178] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:24.125 [2024-07-22 20:42:31.677257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.125 [2024-07-22 20:42:31.677276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.125 [2024-07-22 20:42:31.677292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.125 [2024-07-22 20:42:31.677303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.125 [2024-07-22 20:42:31.677314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.125 [2024-07-22 20:42:31.677325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.125 [2024-07-22 20:42:31.677336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:24.125 [2024-07-22 20:42:31.677346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.125 [2024-07-22 20:42:31.677357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.125 [2024-07-22 20:42:31.677411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.125 [2024-07-22 20:42:31.677438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:24.125 [2024-07-22 20:42:31.768980] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:24.125 Running I/O for 1 seconds... 
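The try.txt dump above is the output of that second bdevperf run; the RPC calls traced just before it (failover.sh@76 through @92) are what give the bdev its redundant paths. Two extra listeners are published on the target subsystem, the same NQN is attached through each portal so NVMe0 has 4420, 4421 and 4422 to fail over between, the 4420 path is then detached out from under the deferred workload, and perform_tests starts the 1-second verify run. Condensed into plain shell (paths, addresses and ports taken from the trace, not the verbatim script):

RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: publish two more portals for the existing subsystem.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# bdevperf side: attach the same subsystem through each portal, giving one
# NVMe0n1 bdev with three paths.
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s "$port" -f ipv4 -n $NQN
done
$RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

# Pull the active path away, then run the deferred workload; bdev_nvme is
# expected to fail over from 4420 to 4421, which is exactly what try.txt records.
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
sleep 3
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests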
00:35:24.125 00:35:24.125 Latency(us) 00:35:24.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.125 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:24.125 Verification LBA range: start 0x0 length 0x4000 00:35:24.125 NVMe0n1 : 1.01 10910.67 42.62 0.00 0.00 11672.31 2771.63 14745.60 00:35:24.125 =================================================================================================================== 00:35:24.125 Total : 10910.67 42.62 0.00 0.00 11672.31 2771.63 14745.60 00:35:24.125 20:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:24.125 20:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:35:24.386 20:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:24.386 20:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:24.386 20:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:35:24.647 20:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:24.647 20:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3828365 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3828365 ']' 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3828365 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3828365 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3828365' 00:35:27.950 killing process with pid 3828365 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3828365 00:35:27.950 20:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3828365 00:35:28.648 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:28.648 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:28.909 rmmod nvme_tcp 00:35:28.909 rmmod nvme_fabrics 00:35:28.909 rmmod nvme_keyring 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3824335 ']' 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3824335 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3824335 ']' 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3824335 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.909 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3824335 00:35:29.170 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:29.170 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:29.170 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3824335' 00:35:29.170 killing process with pid 3824335 00:35:29.170 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3824335 00:35:29.170 20:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3824335 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.741 20:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:32.286 00:35:32.286 real 0m41.047s 00:35:32.286 user 2m7.481s 00:35:32.286 sys 0m8.167s 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:32.286 ************************************ 00:35:32.286 END TEST nvmf_failover 00:35:32.286 ************************************ 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.286 ************************************ 00:35:32.286 START TEST nvmf_host_discovery 00:35:32.286 ************************************ 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:32.286 * Looking for test storage... 00:35:32.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:32.286 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:32.287 20:42:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:35:32.287 20:42:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:38.877 20:42:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.877 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.877 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.877 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:38.877 20:42:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.877 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:38.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:35:38.877 00:35:38.877 --- 10.0.0.2 ping statistics --- 00:35:38.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.878 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:38.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:35:38.878 00:35:38.878 --- 10.0.0.1 ping statistics --- 00:35:38.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.878 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3834771 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3834771 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3834771 ']' 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
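The nvmf/common.sh records above wire the two e810 ports into a point-to-point test bed: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target application is then launched inside that namespace. A minimal shell sketch of the equivalent manual setup, assuming the same interface and namespace names as this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP data traffic
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Every command mirrors one visible in the trace; only the relative nvmf_tgt path is shortened from the Jenkins workspace path.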
00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:38.878 20:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.138 [2024-07-22 20:42:50.918087] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:39.139 [2024-07-22 20:42:50.918211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.139 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.139 [2024-07-22 20:42:51.069385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.400 [2024-07-22 20:42:51.274243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.400 [2024-07-22 20:42:51.274313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.400 [2024-07-22 20:42:51.274327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.400 [2024-07-22 20:42:51.274338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.400 [2024-07-22 20:42:51.274351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.400 [2024-07-22 20:42:51.274400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.662 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:39.662 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:39.662 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:39.662 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:39.662 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 [2024-07-22 20:42:51.719482] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:35:39.923 [2024-07-22 20:42:51.727755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 null0 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 null1 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3834909 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3834909 /tmp/host.sock 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3834909 ']' 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:39.923 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:39.923 20:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:39.923 [2024-07-22 20:42:51.850119] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
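With the target application up on its default RPC socket, the trace shows it being provisioned over JSON-RPC and a second nvmf_tgt instance being started on /tmp/host.sock to play the NVMe host role. A rough sketch of the same steps issued directly with scripts/rpc.py (in SPDK's test helpers rpc_cmd is a thin wrapper around that script; the relative paths assume an SPDK build tree):

  # Target side: TCP transport, a discovery listener on 8009, and two null bdevs to export later.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512       # same name/size/block-size arguments as the trace
  scripts/rpc.py bdev_null_create null1 1000 512
  # Host side: a separate SPDK app with its own RPC socket, used only as the NVMe/TCP host.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

The flags match host/discovery.sh@32-44 in the trace; all later host-side RPCs are directed at /tmp/host.sock with -s.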
00:35:39.923 [2024-07-22 20:42:51.850252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834909 ] 00:35:39.923 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.184 [2024-07-22 20:42:51.966953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.184 [2024-07-22 20:42:52.142572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:40.755 
20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:40.756 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:41.016 20:42:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.016 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.017 [2024-07-22 20:42:52.922752] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:41.017 20:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:41.017 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:35:41.278 20:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:41.848 [2024-07-22 20:42:53.649501] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:41.848 [2024-07-22 20:42:53.649535] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:41.848 [2024-07-22 20:42:53.649563] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:41.848 [2024-07-22 20:42:53.777004] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:41.848 [2024-07-22 20:42:53.840239] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:41.848 [2024-07-22 20:42:53.840273] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
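The repeated rpc_cmd / jq / sort / xargs runs above are the test's polling loop: after bdev_nvme_start_discovery is issued against /tmp/host.sock, waitforcondition re-evaluates the controller and bdev lists once per second, up to ten times, until they match the expected "nvme0" and "nvme0n1" values. A condensed sketch of that pattern using the same RPCs and jq filters seen in the trace (the helper names come from the test scripts; this is a reconstruction, not the scripts verbatim):

  get_subsystem_names() {   # controller names the host currently sees, e.g. "nvme0"
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # namespaces surfaced as bdevs, e.g. "nvme0n1 nvme0n2"
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
  max=10
  while ((max--)); do       # same 10 x 1 s budget as waitforcondition in the trace
      [[ "$(get_subsystem_names)" == "nvme0" ]] && break
      sleep 1
  done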
00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.421 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.422 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:42.740 20:42:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:42.740 20:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:43.682 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:43.682 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:43.682 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:43.682 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.683 [2024-07-22 20:42:55.659032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:43.683 [2024-07-22 20:42:55.659569] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:43.683 [2024-07-22 20:42:55.659616] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:43.683 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:43.945 20:42:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.945 [2024-07-22 20:42:55.788032] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:43.945 20:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:35:43.945 [2024-07-22 20:42:55.846805] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:43.945 [2024-07-22 20:42:55.846836] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:43.945 [2024-07-22 20:42:55.846847] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:44.886 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.148 [2024-07-22 20:42:56.943378] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:45.148 [2024-07-22 20:42:56.943413] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:45.148 [2024-07-22 20:42:56.946272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.148 [2024-07-22 20:42:56.946303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.148 [2024-07-22 20:42:56.946318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.148 [2024-07-22 20:42:56.946329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.148 [2024-07-22 20:42:56.946340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.148 [2024-07-22 20:42:56.946357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.148 [2024-07-22 20:42:56.946374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:45.148 [2024-07-22 20:42:56.946390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.148 [2024-07-22 20:42:56.946406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:45.148 [2024-07-22 20:42:56.956282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.148 [2024-07-22 20:42:56.966323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.148 [2024-07-22 20:42:56.966576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.148 [2024-07-22 20:42:56.966604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.148 [2024-07-22 20:42:56.966618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.148 [2024-07-22 20:42:56.966638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.148 [2024-07-22 20:42:56.966665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.148 [2024-07-22 20:42:56.966676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.148 [2024-07-22 20:42:56.966693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.148 [2024-07-22 20:42:56.966712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.148 [2024-07-22 20:42:56.976407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.148 [2024-07-22 20:42:56.976804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.148 [2024-07-22 20:42:56.976825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.148 [2024-07-22 20:42:56.976836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.148 [2024-07-22 20:42:56.976853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.148 [2024-07-22 20:42:56.976876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.148 [2024-07-22 20:42:56.976886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.148 [2024-07-22 20:42:56.976896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.148 [2024-07-22 20:42:56.976911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.148 [2024-07-22 20:42:56.986483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.148 [2024-07-22 20:42:56.986876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.148 [2024-07-22 20:42:56.986900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.148 [2024-07-22 20:42:56.986912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.148 [2024-07-22 20:42:56.986929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.148 [2024-07-22 20:42:56.986962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.148 [2024-07-22 20:42:56.986973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.148 [2024-07-22 20:42:56.986983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.148 [2024-07-22 20:42:56.986999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.148 [2024-07-22 20:42:56.996557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.148 [2024-07-22 20:42:56.996962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.148 [2024-07-22 20:42:56.996983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.148 [2024-07-22 20:42:56.996994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.148 [2024-07-22 20:42:56.997011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.148 [2024-07-22 20:42:56.997034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.148 [2024-07-22 20:42:56.997044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.148 [2024-07-22 20:42:56.997053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.148 [2024-07-22 20:42:56.997068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.148 20:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:45.148 [2024-07-22 20:42:57.006628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.148 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:45.148 [2024-07-22 20:42:57.006886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.149 [2024-07-22 20:42:57.006908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.149 [2024-07-22 20:42:57.006924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.149 
[2024-07-22 20:42:57.006943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.149 [2024-07-22 20:42:57.006958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.149 [2024-07-22 20:42:57.006969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.149 [2024-07-22 20:42:57.006979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.149 [2024-07-22 20:42:57.006994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.149 [2024-07-22 20:42:57.016707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.149 [2024-07-22 20:42:57.017170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.149 [2024-07-22 20:42:57.017209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.149 [2024-07-22 20:42:57.017230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.149 [2024-07-22 20:42:57.017256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.149 [2024-07-22 20:42:57.017279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.149 [2024-07-22 20:42:57.017302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.149 [2024-07-22 20:42:57.017312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:45.149 [2024-07-22 20:42:57.017329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.149 [2024-07-22 20:42:57.026793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:45.149 [2024-07-22 20:42:57.027042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.149 [2024-07-22 20:42:57.027063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:35:45.149 [2024-07-22 20:42:57.027075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388400 is same with the state(5) to be set 00:35:45.149 [2024-07-22 20:42:57.027092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:35:45.149 [2024-07-22 20:42:57.027106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:45.149 [2024-07-22 20:42:57.027115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:45.149 [2024-07-22 20:42:57.027124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:45.149 [2024-07-22 20:42:57.027139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.149 [2024-07-22 20:42:57.031416] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:45.149 [2024-07-22 20:42:57.031449] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:45.149 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.410 20:42:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.410 20:42:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 [2024-07-22 20:42:58.382409] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:46.794 [2024-07-22 20:42:58.382435] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:46.794 [2024-07-22 20:42:58.382461] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:46.794 [2024-07-22 20:42:58.469748] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:46.794 [2024-07-22 20:42:58.576999] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:46.794 [2024-07-22 20:42:58.577041] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 request: 00:35:46.794 { 00:35:46.794 "name": "nvme", 00:35:46.794 "trtype": "tcp", 00:35:46.794 "traddr": "10.0.0.2", 00:35:46.794 "adrfam": "ipv4", 00:35:46.794 "trsvcid": "8009", 00:35:46.794 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:46.794 "wait_for_attach": true, 00:35:46.794 "method": "bdev_nvme_start_discovery", 00:35:46.794 "req_id": 1 00:35:46.794 } 00:35:46.794 Got JSON-RPC error response 00:35:46.794 response: 00:35:46.794 { 00:35:46.794 "code": -17, 00:35:46.794 "message": "File exists" 00:35:46.794 } 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 request: 00:35:46.794 { 00:35:46.794 "name": "nvme_second", 00:35:46.794 "trtype": "tcp", 00:35:46.794 "traddr": "10.0.0.2", 00:35:46.794 "adrfam": "ipv4", 00:35:46.794 "trsvcid": "8009", 00:35:46.794 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:46.794 "wait_for_attach": true, 00:35:46.794 "method": "bdev_nvme_start_discovery", 00:35:46.794 "req_id": 1 00:35:46.794 } 00:35:46.794 Got JSON-RPC error response 00:35:46.794 response: 00:35:46.794 { 00:35:46.794 "code": -17, 00:35:46.794 "message": "File exists" 00:35:46.794 } 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:46.794 20:42:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:46.794 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.795 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:46.795 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.795 20:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:48.177 [2024-07-22 20:42:59.812584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.177 [2024-07-22 20:42:59.812623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389d00 with addr=10.0.0.2, port=8010 00:35:48.177 [2024-07-22 20:42:59.812667] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:48.177 [2024-07-22 20:42:59.812678] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:48.177 [2024-07-22 20:42:59.812693] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:49.117 [2024-07-22 20:43:00.815059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.117 [2024-07-22 20:43:00.815091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000389f80 with addr=10.0.0.2, port=8010 00:35:49.117 [2024-07-22 20:43:00.815132] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:49.117 [2024-07-22 20:43:00.815142] nvme.c: 
830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:49.117 [2024-07-22 20:43:00.815153] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:50.059 [2024-07-22 20:43:01.816929] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:50.059 request: 00:35:50.059 { 00:35:50.059 "name": "nvme_second", 00:35:50.059 "trtype": "tcp", 00:35:50.059 "traddr": "10.0.0.2", 00:35:50.059 "adrfam": "ipv4", 00:35:50.059 "trsvcid": "8010", 00:35:50.059 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:50.059 "wait_for_attach": false, 00:35:50.059 "attach_timeout_ms": 3000, 00:35:50.059 "method": "bdev_nvme_start_discovery", 00:35:50.059 "req_id": 1 00:35:50.059 } 00:35:50.059 Got JSON-RPC error response 00:35:50.059 response: 00:35:50.059 { 00:35:50.059 "code": -110, 00:35:50.059 "message": "Connection timed out" 00:35:50.059 } 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3834909 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:50.059 rmmod nvme_tcp 00:35:50.059 rmmod nvme_fabrics 00:35:50.059 rmmod nvme_keyring 00:35:50.059 20:43:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3834771 ']' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3834771 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3834771 ']' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3834771 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3834771 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3834771' 00:35:50.059 killing process with pid 3834771 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3834771 00:35:50.059 20:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3834771 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.630 20:43:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:53.173 00:35:53.173 real 0m20.860s 00:35:53.173 user 0m25.929s 00:35:53.173 sys 0m6.629s 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:53.173 ************************************ 00:35:53.173 END TEST nvmf_host_discovery 00:35:53.173 ************************************ 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
00:35:53.173 20:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.173 ************************************ 00:35:53.173 START TEST nvmf_host_multipath_status 00:35:53.173 ************************************ 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:53.173 * Looking for test storage... 00:35:53.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.173 20:43:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:35:53.173 20:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:59.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:59.763 
20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:59.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:59.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.763 20:43:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:59.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:59.763 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:59.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:35:59.764 00:35:59.764 --- 10.0.0.2 ping statistics --- 00:35:59.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.764 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:59.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:35:59.764 00:35:59.764 --- 10.0.0.1 ping statistics --- 00:35:59.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.764 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:59.764 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3841100 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3841100 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3841100 ']' 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:00.025 20:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:00.025 [2024-07-22 20:43:11.883061] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:00.025 [2024-07-22 20:43:11.883180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.025 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.025 [2024-07-22 20:43:12.025111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:00.286 [2024-07-22 20:43:12.206422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.286 [2024-07-22 20:43:12.206466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.286 [2024-07-22 20:43:12.206480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.286 [2024-07-22 20:43:12.206490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.286 [2024-07-22 20:43:12.206500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.286 [2024-07-22 20:43:12.206677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.286 [2024-07-22 20:43:12.206701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.857 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3841100 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:00.858 [2024-07-22 20:43:12.792226] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.858 20:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:01.118 Malloc0 00:36:01.118 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:01.381 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:01.381 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.644 [2024-07-22 20:43:13.471047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:01.644 [2024-07-22 20:43:13.611347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3841472 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3841472 /var/tmp/bdevperf.sock 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3841472 ']' 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:01.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
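By this point the target side is fully assembled: the trace has moved the target NIC cvl_0_0 into the cvl_0_0_ns_spdk namespace at 10.0.0.2 (initiator side cvl_0_1 at 10.0.0.1), started nvmf_tgt on core mask 0x3 inside that namespace, built the storage stack over RPC, and is launching bdevperf (-z -r /var/tmp/bdevperf.sock) as the host-side I/O generator. A condensed recap of the target-side RPC calls visible above, with rpc.py standing in for the full scripts/rpc.py path (a sketch of the sequence, not a drop-in script):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                 # -r enables ANA reporting
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421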
00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:01.644 20:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:02.587 20:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:02.587 20:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:36:02.587 20:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:02.848 20:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:36:03.109 Nvme0n1 00:36:03.109 20:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:03.681 Nvme0n1 00:36:03.681 20:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:36:03.681 20:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:05.597 20:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:36:05.597 20:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:05.597 20:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:05.858 20:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:36:06.800 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:36:06.800 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:06.800 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.800 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:07.061 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.061 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:07.061 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.061 20:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.322 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:07.583 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.583 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:07.583 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.583 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:07.926 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.926 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:07.926 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.926 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:07.926 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.927 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:36:07.927 20:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:08.190 20:43:19 
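Each port_status check in this trace is the same probe: query bdevperf for its NVMe I/O paths over the RPC socket and pick out a single flag for a single listener with jq. A standalone version of that probe, assuming bdevperf is still listening on /var/tmp/bdevperf.sock and jq is installed:

    # Read one flag (current / connected / accessible) for one listener port.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    port=4420 attr=current
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr"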
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:08.190 20:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.576 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:09.837 20:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.837 20:43:21 
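set_ANA_state, as exercised here, is just a pair of nvmf_subsystem_listener_set_ana_state calls against the target, one per listener, with the first argument applied to port 4420 and the second to port 4421. The pair that was just issued, copied from the trace (rpc.py again abbreviates scripts/rpc.py):

    # Advertise a new ANA state on each of the two listeners of cnode1.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized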
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:10.098 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.098 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:10.098 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.098 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:10.358 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.358 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:10.358 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:10.358 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:10.619 20:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:11.575 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:11.575 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:11.575 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:11.575 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:11.835 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:11.835 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:11.835 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:11.835 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:12.097 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:12.097 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:12.097 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.097 20:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:12.097 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.097 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:12.097 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.097 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:12.358 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:12.619 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:12.619 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:12.619 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:12.880 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:12.880 20:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:14.263 20:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:14.263 20:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:14.263 20:43:25 
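check_status takes six booleans, and the order of the port_status calls it drives suggests the mapping: current, connected and accessible, each for port 4420 then 4421. The check_status true false true true true true cycle above therefore asserts that only the 4420 path is currently used for I/O while both paths remain connected and ANA-accessible. A sketch that prints the same six values in that order (same socket and jq assumptions as above):

    # One RPC round-trip per flag, mirroring the order the test asserts them in.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for attr in current connected accessible; do
        for port in 4420 4421; do
            val=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
                | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
            echo "$port $attr=$val"
        done
    done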
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.263 20:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.263 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.524 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:14.784 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:14.784 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:14.784 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:14.784 20:43:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:15.045 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:15.045 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:36:15.045 20:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:15.045 20:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:15.306 20:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:36:16.259 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:36:16.259 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:16.259 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.259 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:16.519 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:16.519 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:16.519 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.519 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:16.779 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:17.039 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:17.039 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:17.039 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.039 20:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:17.039 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:17.039 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:17.039 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:17.039 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:17.299 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:17.299 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:36:17.299 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:17.559 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:17.559 20:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:18.940 20:43:30 
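The cycle above flips port 4420 to inaccessible and 4421 to optimized, and the check that follows (check_status false true true true false true) expects the current flag to migrate to the 4421 path while both paths stay connected and only 4421 remains ANA-accessible. When reproducing this by hand, a compact way to see which listener is carrying I/O is to filter on the current flag; this assumes, as the raw true/false output above suggests, that current is a JSON boolean:

    # Print the trsvcid of whichever path(s) currently carry I/O.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.current) | .transport.trsvcid'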
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:18.940 20:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:19.200 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.200 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:19.200 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.200 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:19.460 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:19.720 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:19.720 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:36:19.980 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:36:19.980 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:19.980 20:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:20.240 20:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:36:21.181 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:36:21.181 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:21.181 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.181 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.441 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:21.701 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:21.702 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:21.702 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.702 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:21.962 20:43:33 
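At multipath_status.sh@116 the trace switches the initiator-side policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active; with both listeners set back to optimized, the checks above now show both paths reporting current=true, i.e. I/O is spread across both listeners instead of being pinned to one. The single call involved, copied from the trace:

    # Let bdevperf drive I/O across all usable paths of Nvme0n1 at once.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active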
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:21.962 20:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:22.222 20:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:22.222 20:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:36:22.222 20:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:22.483 20:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:22.483 20:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:23.869 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:24.130 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:24.130 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:24.130 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:24.130 20:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:24.391 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:24.652 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:24.652 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:24.652 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:24.652 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:24.914 20:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
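The next cycle demotes both listeners to non_optimized; under the active_active policy the check that follows (check_status true true true true true true) still expects both paths to stay current. A quick sanity check when replaying this manually is to count the current paths, again assuming current is a JSON boolean as the raw output suggests:

    # How many paths are actively used right now? Two is expected with active_active.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq '[.poll_groups[].io_paths[] | select(.current)] | length'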
00:36:25.890 20:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:25.890 20:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:25.891 20:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:25.891 20:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:26.151 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.151 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:26.151 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:26.151 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:26.453 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:26.714 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:26.976 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:26.976 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:36:26.976 20:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:27.236 20:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:27.236 20:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:36:28.178 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:36:28.178 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:28.178 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:28.178 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:28.439 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:28.439 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:28.439 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:28.439 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:28.700 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:28.961 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:28.961 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:28.961 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:28.961 20:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3841472 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3841472 ']' 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3841472 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.221 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3841472 00:36:29.482 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:36:29.482 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:36:29.482 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3841472' 00:36:29.482 killing process with pid 3841472 00:36:29.482 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3841472 00:36:29.482 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3841472 00:36:29.742 Connection closed with partial response: 00:36:29.742 00:36:29.742 00:36:30.006 
20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3841472 00:36:30.006 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:30.006 [2024-07-22 20:43:13.710041] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:30.006 [2024-07-22 20:43:13.710159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841472 ] 00:36:30.006 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.006 [2024-07-22 20:43:13.808163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.006 [2024-07-22 20:43:13.943222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.006 Running I/O for 90 seconds... 00:36:30.006 [2024-07-22 20:43:26.997920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.006 [2024-07-22 20:43:26.997966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
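The killprocess trace just before the try.txt dump (common/autotest_common.sh@948-@972) is the standard autotest teardown for the bdevperf instance: confirm the pid argument is non-empty and the process is still alive, look up its comm name so a sudo wrapper is not killed directly, then kill it and wait for it, which is when the "Connection closed with partial response" messages from the dying initiator are flushed. A rough sketch of that pattern, reconstructed from the trace rather than copied from autotest_common.sh:

    killprocess() {                                   # killprocess <pid>
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                     # @948: refuse an empty pid
        kill -0 "$pid" || return 1                    # @952: is it still running?
        if [ "$(uname)" = Linux ]; then               # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: here it reports reactor_2
        fi
        if [ "$process_name" != sudo ]; then          # @958: a sudo wrapper is handled differently in the real helper
            echo "killing process with pid $pid"      # @966
            kill "$pid"                               # @967
        fi
        wait "$pid" || true                           # @972: reap it; bdevperf may exit non-zero here
    }

The extra wait 3841472 at host/multipath_status.sh@139 then makes sure the bdevperf run has fully finished before its log, try.txt, is dumped with cat at @141.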
00:36:30.006 [2024-07-22 20:43:26.998145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:30.006 [2024-07-22 20:43:26.998282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.006 [2024-07-22 20:43:26.998291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.998305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.998312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.998326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.998333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.998346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.998356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:30.007 [2024-07-22 20:43:26.999731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:26.999983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:26.999991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:27.000007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:27.000015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:27.000031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:27.000038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:27.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:27.000061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:30.007 [2024-07-22 20:43:27.000077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.007 [2024-07-22 20:43:27.000084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
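The *NOTICE* completions in this stretch of try.txt are the host-side view of the ANA transitions the test drives: each write that lands on a listener whose ANA state has been set to inaccessible completes with the path-related status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" (status code type 3h, status code 02h), and I/O continues on whichever path is still accessible. On the target side those transitions come from the set_ANA_state helper traced at host/multipath_status.sh@59-@60, one RPC per listener; a sketch of that step under the same assumptions as above (a reconstruction from the trace, reusing the rpc variable defined earlier):

    set_ANA_state() {     # set_ANA_state <state for 4420> <state for 4421>
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"   # @59
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"   # @60
    }

    # e.g. the step traced at @133-@134: 4420 stays reachable, 4421 goes dark
    set_ANA_state non_optimized inaccessible
    sleep 1               # presumably to let the host re-read the ANA log page before check_status runs

Note that these target-side RPCs go to the default SPDK application socket rather than /var/tmp/bdevperf.sock, which is why the traced @59/@60 commands carry no -s option.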
00:36:30.008 [2024-07-22 20:43:27.000437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.000762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.000770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:30.008 [2024-07-22 20:43:27.001332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.008 [2024-07-22 20:43:27.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:30.009 [2024-07-22 20:43:27.001497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.001976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.001996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:36:30.009 [2024-07-22 20:43:27.002370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.009 [2024-07-22 20:43:27.002405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:30.009 [2024-07-22 20:43:27.002425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:27.002432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.164867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.164911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.164956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.164966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.164981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.164989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:30.010 [2024-07-22 20:43:39.165652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.165733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.165740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:30.010 [2024-07-22 20:43:39.166294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.010 [2024-07-22 20:43:39.166313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:30.010 Received shutdown signal, test time was about 25.704125 seconds 00:36:30.010 00:36:30.010 Latency(us) 00:36:30.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.010 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:30.010 Verification LBA range: start 0x0 length 0x4000 00:36:30.010 Nvme0n1 : 25.70 9985.10 39.00 0.00 0.00 12799.66 372.05 3019898.88 00:36:30.010 =================================================================================================================== 00:36:30.010 Total : 9985.10 39.00 0.00 0.00 12799.66 372.05 3019898.88 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:30.010 20:43:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:30.010 20:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:30.010 rmmod nvme_tcp 00:36:30.010 rmmod nvme_fabrics 00:36:30.010 rmmod nvme_keyring 00:36:30.010 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3841100 ']' 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3841100 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3841100 ']' 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3841100 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3841100 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3841100' 00:36:30.271 killing process with pid 3841100 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3841100 00:36:30.271 20:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3841100 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:31.213 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.214 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.214 20:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:33.136 00:36:33.136 real 0m40.334s 00:36:33.136 user 1m43.954s 00:36:33.136 sys 0m10.473s 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:33.136 ************************************ 00:36:33.136 END TEST nvmf_host_multipath_status 00:36:33.136 ************************************ 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.136 20:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.398 ************************************ 00:36:33.398 START TEST nvmf_discovery_remove_ifc 00:36:33.398 ************************************ 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:33.398 * Looking for test storage... 00:36:33.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.398 
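The nvmf_host_multipath_status teardown that just completed above reduces to three steps: delete the test subsystem over JSON-RPC, unload the host-side NVMe/TCP kernel modules, and stop the target process. A minimal sketch, with the repository path, subsystem NQN and pid taken from this run (the trap/killprocess wrappers are harness details):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Remove the subsystem the test created on the target.
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the initiator-side modules pulled in for the test
    # (these produce the "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines above).
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics

    # Stop the target app (nvmfpid was 3841100 in this run) and let it exit.
    sudo kill 3841100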
20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.398 
20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:33.398 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:36:33.399 20:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:41.547 20:43:52 
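The discovery_remove_ifc test starting here is parameterized by a handful of values the rest of this log keeps coming back to; collected from the trace above:

    discovery_port=8009                                # discovery service listener port
    discovery_nqn=nqn.2014-08.org.nvmexpress.discovery # well-known discovery subsystem NQN
    nqn=nqn.2016-06.io.spdk:cnode                      # prefix for the data subsystem (cnode0 below)
    host_nqn=nqn.2021-12.io.spdk:test                  # host NQN the discovery client connects with
    host_sock=/tmp/host.sock                           # RPC socket of the host-side SPDK app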
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 
-- # pci_devs=("${e810[@]}") 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:41.547 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:41.547 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.547 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:41.548 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:41.548 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:41.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:41.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:36:41.548 00:36:41.548 --- 10.0.0.2 ping statistics --- 00:36:41.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.548 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:41.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:41.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:36:41.548 00:36:41.548 --- 10.0.0.1 ping statistics --- 00:36:41.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.548 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3851322 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3851322 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 
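nvmftestinit has now finished building the test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a one-packet ping in each direction confirms reachability. Condensed from the commands traced above:

    # Target NIC lives in its own namespace, initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic in, then check both directions before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1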
-- # '[' -z 3851322 ']' 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:41.548 20:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.548 [2024-07-22 20:43:52.491603] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:41.548 [2024-07-22 20:43:52.491726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.548 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.548 [2024-07-22 20:43:52.642393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.548 [2024-07-22 20:43:52.864939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.548 [2024-07-22 20:43:52.865014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.548 [2024-07-22 20:43:52.865029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.549 [2024-07-22 20:43:52.865038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.549 [2024-07-22 20:43:52.865050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:41.549 [2024-07-22 20:43:52.865095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.549 [2024-07-22 20:43:53.291223] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.549 [2024-07-22 20:43:53.299411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:41.549 null0 00:36:41.549 [2024-07-22 20:43:53.331382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3851526 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3851526 /tmp/host.sock 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3851526 ']' 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:41.549 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:41.549 20:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:41.549 [2024-07-22 20:43:53.432175] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
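On the target side, nvmf_tgt is launched inside that namespace and then configured over its default RPC socket; the trace only shows the resulting listen notices (port 8009 for discovery, port 4420 for the data subsystem, backed by a null bdev named null0), not the RPC batch itself. A hypothetical equivalent using standard rpc.py commands, with the bdev size chosen arbitrarily for illustration, might look like:

    # Start the target app inside the namespace (command as traced above).
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # ... wait for /var/tmp/spdk.sock to appear (waitforlisten in the harness) ...

    trpc="ip netns exec cvl_0_0_ns_spdk $SPDK/scripts/rpc.py"

    $trpc nvmf_create_transport -t tcp
    $trpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service
    $trpc bdev_null_create null0 1000 512                                    # illustrative: 1000 MiB, 512 B blocks
    $trpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a                # allow any host
    $trpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $trpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420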
00:36:41.549 [2024-07-22 20:43:53.432296] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851526 ] 00:36:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.549 [2024-07-22 20:43:53.541102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.809 [2024-07-22 20:43:53.718821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.379 20:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:43.765 [2024-07-22 20:43:55.453530] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:43.765 [2024-07-22 20:43:55.453565] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:43.765 [2024-07-22 20:43:55.453593] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:43.765 [2024-07-22 20:43:55.583017] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:43.765 [2024-07-22 20:43:55.766596] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:43.765 [2024-07-22 20:43:55.766660] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:43.765 [2024-07-22 20:43:55.766710] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:43.765 [2024-07-22 
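The "host" in this test is itself an SPDK application: a second nvmf_tgt bound to /tmp/host.sock and driven entirely over RPC. The sequence traced above, with rpc.py standing in for the script's rpc_cmd wrapper, is:

    RPC="$SPDK/scripts/rpc.py -s /tmp/host.sock"

    # Host-side app: single core, RPC-gated startup, bdev_nvme debug logging.
    $SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    $RPC bdev_nvme_set_options -e 1            # options as in the trace (-e 1)
    $RPC framework_start_init                  # finish startup once options are set

    # Attach through the discovery service at 10.0.0.2:8009; discovered controllers
    # become bdevs with the "nvme" prefix, which is why nvme0n1 appears below.
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach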
20:43:55.766737] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:43.765 [2024-07-22 20:43:55.766769] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:43.765 [2024-07-22 20:43:55.770865] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x615000388900 was disconnected and freed. delete nvme_qpair. 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:43.765 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.026 20:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:44.026 20:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
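The get_bdev_list / wait_for_bdev helpers being exercised here are a once-a-second poll of the host app's bdev list; once nvme0n1 has appeared, the test removes the target address and downs the interface to provoke the path loss the remainder of this log documents. Roughly, under the same names the trace uses:

    get_bdev_list() {
        # All bdev names known to the host app, sorted and space-joined.
        $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list matches the expected value (e.g. "nvme0n1", or "" after removal).
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }

    wait_for_bdev nvme0n1                                             # namespace bdev created by discovery

    # Pull the rug out: drop the target address and take the interface down.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''                                                  # gone once the controller is declared lost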
-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:45.422 20:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:46.363 20:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:47.305 20:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:48.246 20:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:49.187 [2024-07-22 20:44:01.206794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:49.187 [2024-07-22 20:44:01.206855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:49.187 [2024-07-22 20:44:01.206872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.187 [2024-07-22 20:44:01.206887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:49.187 [2024-07-22 20:44:01.206898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.187 [2024-07-22 20:44:01.206910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:49.187 [2024-07-22 20:44:01.206920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.187 [2024-07-22 20:44:01.206932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:49.187 [2024-07-22 20:44:01.206942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.187 [2024-07-22 20:44:01.206954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:49.187 [2024-07-22 20:44:01.206964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.187 [2024-07-22 20:44:01.206975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:36:49.447 [2024-07-22 20:44:01.216810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:36:49.447 [2024-07-22 20:44:01.226857] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:49.447 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:49.447 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:49.447 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:49.447 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:49.448 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.448 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:49.448 20:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:50.390 [2024-07-22 20:44:02.257279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:50.390 [2024-07-22 20:44:02.257343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:36:50.390 [2024-07-22 20:44:02.257363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:36:50.390 [2024-07-22 20:44:02.257405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:36:50.390 [2024-07-22 20:44:02.257932] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:50.390 [2024-07-22 20:44:02.257959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:50.390 [2024-07-22 20:44:02.257971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:50.390 [2024-07-22 20:44:02.257984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:50.390 [2024-07-22 20:44:02.258013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:50.390 [2024-07-22 20:44:02.258026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:50.390 20:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.390 20:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:50.390 20:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:51.332 [2024-07-22 20:44:03.260422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:51.332 [2024-07-22 20:44:03.260449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:51.332 [2024-07-22 20:44:03.260459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:51.332 [2024-07-22 20:44:03.260470] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:36:51.332 [2024-07-22 20:44:03.260490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:51.332 [2024-07-22 20:44:03.260520] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:51.332 [2024-07-22 20:44:03.260558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:51.332 [2024-07-22 20:44:03.260575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:51.332 [2024-07-22 20:44:03.260591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:51.332 [2024-07-22 20:44:03.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:51.332 [2024-07-22 20:44:03.260614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:51.332 [2024-07-22 20:44:03.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:51.332 [2024-07-22 20:44:03.260640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:51.332 [2024-07-22 20:44:03.260651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:51.332 [2024-07-22 20:44:03.260663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:51.332 [2024-07-22 20:44:03.260674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:51.332 [2024-07-22 20:44:03.260685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:36:51.332 [2024-07-22 20:44:03.261079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388400 (9): Bad file descriptor 00:36:51.332 [2024-07-22 20:44:03.262098] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:51.332 [2024-07-22 20:44:03.262120] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:51.332 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:51.593 20:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:52.535 20:44:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:52.535 20:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:53.506 [2024-07-22 20:44:05.317435] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:53.506 [2024-07-22 20:44:05.317462] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:53.506 [2024-07-22 20:44:05.317487] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:53.506 [2024-07-22 20:44:05.404781] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:53.768 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:53.768 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.769 [2024-07-22 20:44:05.591188] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:53.769 [2024-07-22 20:44:05.591255] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:53.769 [2024-07-22 20:44:05.591302] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:53.769 [2024-07-22 20:44:05.591327] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:53.769 [2024-07-22 20:44:05.591341] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:53.769 20:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:53.769 [2024-07-22 20:44:05.636374] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x615000389300 was disconnected and 
freed. delete nvme_qpair. 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3851526 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3851526 ']' 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3851526 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3851526 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3851526' 00:36:54.713 killing process with pid 3851526 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3851526 00:36:54.713 20:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3851526 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:55.655 rmmod nvme_tcp 00:36:55.655 rmmod nvme_fabrics 00:36:55.655 rmmod 
nvme_keyring 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3851322 ']' 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3851322 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3851322 ']' 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3851322 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3851322 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3851322' 00:36:55.655 killing process with pid 3851322 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3851322 00:36:55.655 20:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3851322 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:56.226 20:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:58.771 00:36:58.771 real 0m25.052s 00:36:58.771 user 0m31.124s 00:36:58.771 sys 0m6.699s 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 ************************************ 00:36:58.771 END TEST nvmf_discovery_remove_ifc 00:36:58.771 ************************************ 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:58.771 
20:44:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 ************************************ 00:36:58.771 START TEST nvmf_identify_kernel_target 00:36:58.771 ************************************ 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:58.771 * Looking for test storage... 00:36:58.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:58.771 20:44:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:58.771 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:58.772 20:44:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:37:05.364 20:44:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:05.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.364 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
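The pci_devs / e810 matching traced above is how nvmf/common.sh decides which NICs the TCP tests may use: it filters PCI devices by vendor/device ID (8086:159b for the E810 ports matched in this run) and then looks up the net devices registered under each PCI address. A rough standalone equivalent, assuming lspci is available (this is not the nvmf/common.sh implementation, which keeps its own pci_bus_cache):

    # List Intel E810 (8086:159b) ports and the kernel net devices under them,
    # mirroring the "Found net devices under 0000:4b:00.x" messages in this log.
    for pci in $(lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net device under $pci: ${dev##*/}"
        done
    done

The two ports found this way (cvl_0_0 and cvl_0_1) become the target and initiator interfaces for the namespace-based TCP setup that follows.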
00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:05.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:05.365 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:05.365 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:05.365 
20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.365 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:05.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:05.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:37:05.626 00:37:05.626 --- 10.0.0.2 ping statistics --- 00:37:05.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.626 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:05.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:37:05.626 00:37:05.626 --- 10.0.0.1 ping statistics --- 00:37:05.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.626 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:05.626 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:37:05.627 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:37:05.627 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:05.887 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:05.887 20:44:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:09.190 Waiting for block devices as requested 00:37:09.190 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:09.190 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:09.190 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:09.190 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:09.451 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:09.451 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:09.451 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:09.711 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:09.711 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:09.971 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:09.971 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:09.971 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:09.971 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:10.233 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:10.233 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:10.233 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:10.233 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
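configure_kernel_target, which starts here, builds a Linux kernel nvmet target through configfs and exports the local /dev/nvme0n1 as a namespace on 10.0.0.1:4420. The echo values appear in the trace below without their destination files; the sketch that follows maps them onto the standard nvmet configfs attributes (the attribute paths come from the kernel nvmet layout, not from the trace, so treat this as an approximation of nvmf/common.sh rather than a copy):

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                      # nvmet_tcp is typically pulled in when the port is bound
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # local NVMe disk exported as NSID 1
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # Linking the subsystem into the port enables it; nvme discover -t tcp -a 10.0.0.1 -s 4420
    # should then return the two discovery log records shown below.
    ln -s "$subsys" "$port/subsystems/"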
00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:10.806 No valid GPT data, bailing 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:10.806 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:10.806 00:37:10.806 Discovery Log Number of Records 2, Generation counter 2 00:37:10.806 =====Discovery Log Entry 0====== 00:37:10.806 trtype: tcp 00:37:10.806 adrfam: ipv4 00:37:10.806 subtype: current discovery subsystem 00:37:10.806 treq: not specified, sq flow control disable supported 00:37:10.806 portid: 1 00:37:10.806 trsvcid: 4420 00:37:10.806 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:10.806 traddr: 10.0.0.1 00:37:10.806 eflags: none 00:37:10.806 sectype: none 00:37:10.806 =====Discovery Log Entry 1====== 00:37:10.806 trtype: tcp 00:37:10.806 adrfam: ipv4 00:37:10.806 subtype: nvme subsystem 00:37:10.806 treq: not specified, sq flow control disable supported 00:37:10.806 portid: 1 00:37:10.806 trsvcid: 4420 00:37:10.806 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:10.806 traddr: 10.0.0.1 00:37:10.806 eflags: none 00:37:10.806 sectype: none 00:37:10.806 20:44:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:37:10.806 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:37:10.806 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.806 ===================================================== 00:37:10.806 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:37:10.806 ===================================================== 00:37:10.806 Controller Capabilities/Features 00:37:10.806 ================================ 00:37:10.806 Vendor ID: 0000 00:37:10.806 Subsystem Vendor ID: 0000 00:37:10.806 Serial Number: ad9b052639337542fb32 00:37:10.806 Model Number: Linux 00:37:10.806 Firmware Version: 6.7.0-68 00:37:10.806 Recommended Arb Burst: 0 00:37:10.806 IEEE OUI Identifier: 00 00 00 00:37:10.806 Multi-path I/O 00:37:10.806 May have multiple subsystem ports: No 00:37:10.806 May have multiple controllers: No 00:37:10.806 Associated with SR-IOV VF: No 00:37:10.806 Max Data Transfer Size: Unlimited 00:37:10.806 Max Number of Namespaces: 0 00:37:10.806 Max Number of I/O Queues: 1024 00:37:10.806 NVMe Specification Version (VS): 1.3 00:37:10.806 NVMe Specification Version (Identify): 1.3 00:37:10.806 Maximum Queue Entries: 1024 00:37:10.806 Contiguous Queues Required: No 00:37:10.806 Arbitration Mechanisms Supported 00:37:10.806 Weighted Round Robin: Not Supported 00:37:10.806 Vendor Specific: Not Supported 00:37:10.806 Reset Timeout: 7500 ms 00:37:10.806 Doorbell Stride: 4 bytes 00:37:10.806 NVM Subsystem Reset: Not Supported 00:37:10.806 Command Sets Supported 00:37:10.806 NVM Command Set: Supported 00:37:10.806 Boot Partition: Not Supported 00:37:10.806 Memory Page Size Minimum: 4096 bytes 00:37:10.806 Memory Page Size Maximum: 4096 bytes 00:37:10.806 Persistent Memory Region: Not Supported 00:37:10.807 Optional Asynchronous Events Supported 00:37:10.807 Namespace Attribute Notices: Not Supported 00:37:10.807 Firmware Activation Notices: Not Supported 00:37:10.807 ANA Change Notices: Not Supported 00:37:10.807 PLE Aggregate Log Change Notices: Not Supported 00:37:10.807 LBA Status Info Alert Notices: Not Supported 00:37:10.807 EGE Aggregate Log Change Notices: Not Supported 00:37:10.807 Normal NVM Subsystem Shutdown event: Not Supported 00:37:10.807 Zone Descriptor Change Notices: Not Supported 00:37:10.807 Discovery Log Change Notices: Supported 00:37:10.807 Controller Attributes 00:37:10.807 128-bit Host Identifier: Not Supported 00:37:10.807 Non-Operational Permissive Mode: Not Supported 00:37:10.807 NVM Sets: Not Supported 00:37:10.807 Read Recovery Levels: Not Supported 00:37:10.807 Endurance Groups: Not Supported 00:37:10.807 Predictable Latency Mode: Not Supported 00:37:10.807 Traffic Based Keep ALive: Not Supported 00:37:10.807 Namespace Granularity: Not Supported 00:37:10.807 SQ Associations: Not Supported 00:37:10.807 UUID List: Not Supported 00:37:10.807 Multi-Domain Subsystem: Not Supported 00:37:10.807 Fixed Capacity Management: Not Supported 00:37:10.807 Variable Capacity Management: Not Supported 00:37:10.807 Delete Endurance Group: Not Supported 00:37:10.807 Delete NVM Set: Not Supported 00:37:10.807 Extended LBA Formats Supported: Not Supported 00:37:10.807 Flexible Data Placement Supported: Not Supported 00:37:10.807 00:37:10.807 Controller Memory Buffer Support 00:37:10.807 ================================ 00:37:10.807 Supported: No 
00:37:10.807 00:37:10.807 Persistent Memory Region Support 00:37:10.807 ================================ 00:37:10.807 Supported: No 00:37:10.807 00:37:10.807 Admin Command Set Attributes 00:37:10.807 ============================ 00:37:10.807 Security Send/Receive: Not Supported 00:37:10.807 Format NVM: Not Supported 00:37:10.807 Firmware Activate/Download: Not Supported 00:37:10.807 Namespace Management: Not Supported 00:37:10.807 Device Self-Test: Not Supported 00:37:10.807 Directives: Not Supported 00:37:10.807 NVMe-MI: Not Supported 00:37:10.807 Virtualization Management: Not Supported 00:37:10.807 Doorbell Buffer Config: Not Supported 00:37:10.807 Get LBA Status Capability: Not Supported 00:37:10.807 Command & Feature Lockdown Capability: Not Supported 00:37:10.807 Abort Command Limit: 1 00:37:10.807 Async Event Request Limit: 1 00:37:10.807 Number of Firmware Slots: N/A 00:37:10.807 Firmware Slot 1 Read-Only: N/A 00:37:11.069 Firmware Activation Without Reset: N/A 00:37:11.069 Multiple Update Detection Support: N/A 00:37:11.069 Firmware Update Granularity: No Information Provided 00:37:11.069 Per-Namespace SMART Log: No 00:37:11.069 Asymmetric Namespace Access Log Page: Not Supported 00:37:11.069 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:37:11.069 Command Effects Log Page: Not Supported 00:37:11.069 Get Log Page Extended Data: Supported 00:37:11.069 Telemetry Log Pages: Not Supported 00:37:11.069 Persistent Event Log Pages: Not Supported 00:37:11.069 Supported Log Pages Log Page: May Support 00:37:11.069 Commands Supported & Effects Log Page: Not Supported 00:37:11.069 Feature Identifiers & Effects Log Page:May Support 00:37:11.069 NVMe-MI Commands & Effects Log Page: May Support 00:37:11.069 Data Area 4 for Telemetry Log: Not Supported 00:37:11.069 Error Log Page Entries Supported: 1 00:37:11.069 Keep Alive: Not Supported 00:37:11.069 00:37:11.069 NVM Command Set Attributes 00:37:11.069 ========================== 00:37:11.069 Submission Queue Entry Size 00:37:11.069 Max: 1 00:37:11.069 Min: 1 00:37:11.069 Completion Queue Entry Size 00:37:11.069 Max: 1 00:37:11.069 Min: 1 00:37:11.069 Number of Namespaces: 0 00:37:11.069 Compare Command: Not Supported 00:37:11.069 Write Uncorrectable Command: Not Supported 00:37:11.069 Dataset Management Command: Not Supported 00:37:11.069 Write Zeroes Command: Not Supported 00:37:11.069 Set Features Save Field: Not Supported 00:37:11.069 Reservations: Not Supported 00:37:11.069 Timestamp: Not Supported 00:37:11.069 Copy: Not Supported 00:37:11.069 Volatile Write Cache: Not Present 00:37:11.069 Atomic Write Unit (Normal): 1 00:37:11.069 Atomic Write Unit (PFail): 1 00:37:11.069 Atomic Compare & Write Unit: 1 00:37:11.069 Fused Compare & Write: Not Supported 00:37:11.069 Scatter-Gather List 00:37:11.069 SGL Command Set: Supported 00:37:11.069 SGL Keyed: Not Supported 00:37:11.069 SGL Bit Bucket Descriptor: Not Supported 00:37:11.069 SGL Metadata Pointer: Not Supported 00:37:11.069 Oversized SGL: Not Supported 00:37:11.069 SGL Metadata Address: Not Supported 00:37:11.069 SGL Offset: Supported 00:37:11.069 Transport SGL Data Block: Not Supported 00:37:11.069 Replay Protected Memory Block: Not Supported 00:37:11.069 00:37:11.069 Firmware Slot Information 00:37:11.069 ========================= 00:37:11.069 Active slot: 0 00:37:11.069 00:37:11.069 00:37:11.069 Error Log 00:37:11.069 ========= 00:37:11.069 00:37:11.069 Active Namespaces 00:37:11.069 ================= 00:37:11.069 Discovery Log Page 00:37:11.069 ================== 00:37:11.069 
Generation Counter: 2 00:37:11.069 Number of Records: 2 00:37:11.069 Record Format: 0 00:37:11.069 00:37:11.069 Discovery Log Entry 0 00:37:11.069 ---------------------- 00:37:11.069 Transport Type: 3 (TCP) 00:37:11.069 Address Family: 1 (IPv4) 00:37:11.069 Subsystem Type: 3 (Current Discovery Subsystem) 00:37:11.069 Entry Flags: 00:37:11.069 Duplicate Returned Information: 0 00:37:11.070 Explicit Persistent Connection Support for Discovery: 0 00:37:11.070 Transport Requirements: 00:37:11.070 Secure Channel: Not Specified 00:37:11.070 Port ID: 1 (0x0001) 00:37:11.070 Controller ID: 65535 (0xffff) 00:37:11.070 Admin Max SQ Size: 32 00:37:11.070 Transport Service Identifier: 4420 00:37:11.070 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:37:11.070 Transport Address: 10.0.0.1 00:37:11.070 Discovery Log Entry 1 00:37:11.070 ---------------------- 00:37:11.070 Transport Type: 3 (TCP) 00:37:11.070 Address Family: 1 (IPv4) 00:37:11.070 Subsystem Type: 2 (NVM Subsystem) 00:37:11.070 Entry Flags: 00:37:11.070 Duplicate Returned Information: 0 00:37:11.070 Explicit Persistent Connection Support for Discovery: 0 00:37:11.070 Transport Requirements: 00:37:11.070 Secure Channel: Not Specified 00:37:11.070 Port ID: 1 (0x0001) 00:37:11.070 Controller ID: 65535 (0xffff) 00:37:11.070 Admin Max SQ Size: 32 00:37:11.070 Transport Service Identifier: 4420 00:37:11.070 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:37:11.070 Transport Address: 10.0.0.1 00:37:11.070 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:11.070 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.070 get_feature(0x01) failed 00:37:11.070 get_feature(0x02) failed 00:37:11.070 get_feature(0x04) failed 00:37:11.070 ===================================================== 00:37:11.070 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:11.070 ===================================================== 00:37:11.070 Controller Capabilities/Features 00:37:11.070 ================================ 00:37:11.070 Vendor ID: 0000 00:37:11.070 Subsystem Vendor ID: 0000 00:37:11.070 Serial Number: 67cbb00f312a62e86e97 00:37:11.070 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:37:11.070 Firmware Version: 6.7.0-68 00:37:11.070 Recommended Arb Burst: 6 00:37:11.070 IEEE OUI Identifier: 00 00 00 00:37:11.070 Multi-path I/O 00:37:11.070 May have multiple subsystem ports: Yes 00:37:11.070 May have multiple controllers: Yes 00:37:11.070 Associated with SR-IOV VF: No 00:37:11.070 Max Data Transfer Size: Unlimited 00:37:11.070 Max Number of Namespaces: 1024 00:37:11.070 Max Number of I/O Queues: 128 00:37:11.070 NVMe Specification Version (VS): 1.3 00:37:11.070 NVMe Specification Version (Identify): 1.3 00:37:11.070 Maximum Queue Entries: 1024 00:37:11.070 Contiguous Queues Required: No 00:37:11.070 Arbitration Mechanisms Supported 00:37:11.070 Weighted Round Robin: Not Supported 00:37:11.070 Vendor Specific: Not Supported 00:37:11.070 Reset Timeout: 7500 ms 00:37:11.070 Doorbell Stride: 4 bytes 00:37:11.070 NVM Subsystem Reset: Not Supported 00:37:11.070 Command Sets Supported 00:37:11.070 NVM Command Set: Supported 00:37:11.070 Boot Partition: Not Supported 00:37:11.070 Memory Page Size Minimum: 4096 bytes 00:37:11.070 Memory Page Size Maximum: 4096 bytes 00:37:11.070 
Persistent Memory Region: Not Supported 00:37:11.070 Optional Asynchronous Events Supported 00:37:11.070 Namespace Attribute Notices: Supported 00:37:11.070 Firmware Activation Notices: Not Supported 00:37:11.070 ANA Change Notices: Supported 00:37:11.070 PLE Aggregate Log Change Notices: Not Supported 00:37:11.070 LBA Status Info Alert Notices: Not Supported 00:37:11.070 EGE Aggregate Log Change Notices: Not Supported 00:37:11.070 Normal NVM Subsystem Shutdown event: Not Supported 00:37:11.070 Zone Descriptor Change Notices: Not Supported 00:37:11.070 Discovery Log Change Notices: Not Supported 00:37:11.070 Controller Attributes 00:37:11.070 128-bit Host Identifier: Supported 00:37:11.070 Non-Operational Permissive Mode: Not Supported 00:37:11.070 NVM Sets: Not Supported 00:37:11.070 Read Recovery Levels: Not Supported 00:37:11.070 Endurance Groups: Not Supported 00:37:11.070 Predictable Latency Mode: Not Supported 00:37:11.070 Traffic Based Keep ALive: Supported 00:37:11.070 Namespace Granularity: Not Supported 00:37:11.070 SQ Associations: Not Supported 00:37:11.070 UUID List: Not Supported 00:37:11.070 Multi-Domain Subsystem: Not Supported 00:37:11.070 Fixed Capacity Management: Not Supported 00:37:11.070 Variable Capacity Management: Not Supported 00:37:11.070 Delete Endurance Group: Not Supported 00:37:11.070 Delete NVM Set: Not Supported 00:37:11.070 Extended LBA Formats Supported: Not Supported 00:37:11.070 Flexible Data Placement Supported: Not Supported 00:37:11.070 00:37:11.070 Controller Memory Buffer Support 00:37:11.070 ================================ 00:37:11.070 Supported: No 00:37:11.070 00:37:11.070 Persistent Memory Region Support 00:37:11.070 ================================ 00:37:11.070 Supported: No 00:37:11.070 00:37:11.070 Admin Command Set Attributes 00:37:11.070 ============================ 00:37:11.070 Security Send/Receive: Not Supported 00:37:11.070 Format NVM: Not Supported 00:37:11.070 Firmware Activate/Download: Not Supported 00:37:11.070 Namespace Management: Not Supported 00:37:11.070 Device Self-Test: Not Supported 00:37:11.070 Directives: Not Supported 00:37:11.070 NVMe-MI: Not Supported 00:37:11.070 Virtualization Management: Not Supported 00:37:11.070 Doorbell Buffer Config: Not Supported 00:37:11.070 Get LBA Status Capability: Not Supported 00:37:11.070 Command & Feature Lockdown Capability: Not Supported 00:37:11.070 Abort Command Limit: 4 00:37:11.070 Async Event Request Limit: 4 00:37:11.070 Number of Firmware Slots: N/A 00:37:11.070 Firmware Slot 1 Read-Only: N/A 00:37:11.070 Firmware Activation Without Reset: N/A 00:37:11.070 Multiple Update Detection Support: N/A 00:37:11.070 Firmware Update Granularity: No Information Provided 00:37:11.070 Per-Namespace SMART Log: Yes 00:37:11.070 Asymmetric Namespace Access Log Page: Supported 00:37:11.070 ANA Transition Time : 10 sec 00:37:11.070 00:37:11.070 Asymmetric Namespace Access Capabilities 00:37:11.070 ANA Optimized State : Supported 00:37:11.070 ANA Non-Optimized State : Supported 00:37:11.070 ANA Inaccessible State : Supported 00:37:11.070 ANA Persistent Loss State : Supported 00:37:11.070 ANA Change State : Supported 00:37:11.070 ANAGRPID is not changed : No 00:37:11.070 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:37:11.070 00:37:11.070 ANA Group Identifier Maximum : 128 00:37:11.070 Number of ANA Group Identifiers : 128 00:37:11.070 Max Number of Allowed Namespaces : 1024 00:37:11.070 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:37:11.070 Command Effects Log Page: Supported 
00:37:11.070 Get Log Page Extended Data: Supported 00:37:11.070 Telemetry Log Pages: Not Supported 00:37:11.070 Persistent Event Log Pages: Not Supported 00:37:11.070 Supported Log Pages Log Page: May Support 00:37:11.070 Commands Supported & Effects Log Page: Not Supported 00:37:11.070 Feature Identifiers & Effects Log Page:May Support 00:37:11.070 NVMe-MI Commands & Effects Log Page: May Support 00:37:11.070 Data Area 4 for Telemetry Log: Not Supported 00:37:11.070 Error Log Page Entries Supported: 128 00:37:11.070 Keep Alive: Supported 00:37:11.070 Keep Alive Granularity: 1000 ms 00:37:11.070 00:37:11.070 NVM Command Set Attributes 00:37:11.070 ========================== 00:37:11.070 Submission Queue Entry Size 00:37:11.070 Max: 64 00:37:11.070 Min: 64 00:37:11.070 Completion Queue Entry Size 00:37:11.070 Max: 16 00:37:11.070 Min: 16 00:37:11.070 Number of Namespaces: 1024 00:37:11.070 Compare Command: Not Supported 00:37:11.070 Write Uncorrectable Command: Not Supported 00:37:11.070 Dataset Management Command: Supported 00:37:11.070 Write Zeroes Command: Supported 00:37:11.070 Set Features Save Field: Not Supported 00:37:11.070 Reservations: Not Supported 00:37:11.070 Timestamp: Not Supported 00:37:11.070 Copy: Not Supported 00:37:11.070 Volatile Write Cache: Present 00:37:11.070 Atomic Write Unit (Normal): 1 00:37:11.070 Atomic Write Unit (PFail): 1 00:37:11.070 Atomic Compare & Write Unit: 1 00:37:11.070 Fused Compare & Write: Not Supported 00:37:11.070 Scatter-Gather List 00:37:11.070 SGL Command Set: Supported 00:37:11.070 SGL Keyed: Not Supported 00:37:11.070 SGL Bit Bucket Descriptor: Not Supported 00:37:11.070 SGL Metadata Pointer: Not Supported 00:37:11.070 Oversized SGL: Not Supported 00:37:11.070 SGL Metadata Address: Not Supported 00:37:11.070 SGL Offset: Supported 00:37:11.070 Transport SGL Data Block: Not Supported 00:37:11.070 Replay Protected Memory Block: Not Supported 00:37:11.070 00:37:11.070 Firmware Slot Information 00:37:11.070 ========================= 00:37:11.070 Active slot: 0 00:37:11.070 00:37:11.070 Asymmetric Namespace Access 00:37:11.071 =========================== 00:37:11.071 Change Count : 0 00:37:11.071 Number of ANA Group Descriptors : 1 00:37:11.071 ANA Group Descriptor : 0 00:37:11.071 ANA Group ID : 1 00:37:11.071 Number of NSID Values : 1 00:37:11.071 Change Count : 0 00:37:11.071 ANA State : 1 00:37:11.071 Namespace Identifier : 1 00:37:11.071 00:37:11.071 Commands Supported and Effects 00:37:11.071 ============================== 00:37:11.071 Admin Commands 00:37:11.071 -------------- 00:37:11.071 Get Log Page (02h): Supported 00:37:11.071 Identify (06h): Supported 00:37:11.071 Abort (08h): Supported 00:37:11.071 Set Features (09h): Supported 00:37:11.071 Get Features (0Ah): Supported 00:37:11.071 Asynchronous Event Request (0Ch): Supported 00:37:11.071 Keep Alive (18h): Supported 00:37:11.071 I/O Commands 00:37:11.071 ------------ 00:37:11.071 Flush (00h): Supported 00:37:11.071 Write (01h): Supported LBA-Change 00:37:11.071 Read (02h): Supported 00:37:11.071 Write Zeroes (08h): Supported LBA-Change 00:37:11.071 Dataset Management (09h): Supported 00:37:11.071 00:37:11.071 Error Log 00:37:11.071 ========= 00:37:11.071 Entry: 0 00:37:11.071 Error Count: 0x3 00:37:11.071 Submission Queue Id: 0x0 00:37:11.071 Command Id: 0x5 00:37:11.071 Phase Bit: 0 00:37:11.071 Status Code: 0x2 00:37:11.071 Status Code Type: 0x0 00:37:11.071 Do Not Retry: 1 00:37:11.071 Error Location: 0x28 00:37:11.071 LBA: 0x0 00:37:11.071 Namespace: 0x0 00:37:11.071 Vendor Log 
Page: 0x0 00:37:11.071 ----------- 00:37:11.071 Entry: 1 00:37:11.071 Error Count: 0x2 00:37:11.071 Submission Queue Id: 0x0 00:37:11.071 Command Id: 0x5 00:37:11.071 Phase Bit: 0 00:37:11.071 Status Code: 0x2 00:37:11.071 Status Code Type: 0x0 00:37:11.071 Do Not Retry: 1 00:37:11.071 Error Location: 0x28 00:37:11.071 LBA: 0x0 00:37:11.071 Namespace: 0x0 00:37:11.071 Vendor Log Page: 0x0 00:37:11.071 ----------- 00:37:11.071 Entry: 2 00:37:11.071 Error Count: 0x1 00:37:11.071 Submission Queue Id: 0x0 00:37:11.071 Command Id: 0x4 00:37:11.071 Phase Bit: 0 00:37:11.071 Status Code: 0x2 00:37:11.071 Status Code Type: 0x0 00:37:11.071 Do Not Retry: 1 00:37:11.071 Error Location: 0x28 00:37:11.071 LBA: 0x0 00:37:11.071 Namespace: 0x0 00:37:11.071 Vendor Log Page: 0x0 00:37:11.071 00:37:11.071 Number of Queues 00:37:11.071 ================ 00:37:11.071 Number of I/O Submission Queues: 128 00:37:11.071 Number of I/O Completion Queues: 128 00:37:11.071 00:37:11.071 ZNS Specific Controller Data 00:37:11.071 ============================ 00:37:11.071 Zone Append Size Limit: 0 00:37:11.071 00:37:11.071 00:37:11.071 Active Namespaces 00:37:11.071 ================= 00:37:11.071 get_feature(0x05) failed 00:37:11.071 Namespace ID:1 00:37:11.071 Command Set Identifier: NVM (00h) 00:37:11.071 Deallocate: Supported 00:37:11.071 Deallocated/Unwritten Error: Not Supported 00:37:11.071 Deallocated Read Value: Unknown 00:37:11.071 Deallocate in Write Zeroes: Not Supported 00:37:11.071 Deallocated Guard Field: 0xFFFF 00:37:11.071 Flush: Supported 00:37:11.071 Reservation: Not Supported 00:37:11.071 Namespace Sharing Capabilities: Multiple Controllers 00:37:11.071 Size (in LBAs): 3750748848 (1788GiB) 00:37:11.071 Capacity (in LBAs): 3750748848 (1788GiB) 00:37:11.071 Utilization (in LBAs): 3750748848 (1788GiB) 00:37:11.071 UUID: eccc8389-02c4-405f-b9b6-fe62deb84232 00:37:11.071 Thin Provisioning: Not Supported 00:37:11.071 Per-NS Atomic Units: Yes 00:37:11.071 Atomic Write Unit (Normal): 8 00:37:11.071 Atomic Write Unit (PFail): 8 00:37:11.071 Preferred Write Granularity: 8 00:37:11.071 Atomic Compare & Write Unit: 8 00:37:11.071 Atomic Boundary Size (Normal): 0 00:37:11.071 Atomic Boundary Size (PFail): 0 00:37:11.071 Atomic Boundary Offset: 0 00:37:11.071 NGUID/EUI64 Never Reused: No 00:37:11.071 ANA group ID: 1 00:37:11.071 Namespace Write Protected: No 00:37:11.071 Number of LBA Formats: 1 00:37:11.071 Current LBA Format: LBA Format #00 00:37:11.071 LBA Format #00: Data Size: 512 Metadata Size: 0 00:37:11.071 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:11.071 20:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:11.071 rmmod nvme_tcp 00:37:11.071 rmmod nvme_fabrics 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:11.071 20:44:23 
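[Annotation] The controller dump that ends above was produced by SPDK's identify example app pointed at the kernel-configured target through a transport ID string (the full command appears earlier in this log). A minimal sketch of running it by hand, reusing the paths and addresses from this run:

    # Sketch only: query an NVMe-oF/TCP subsystem with SPDK's identify example app.
    # Paths and addresses below are the ones used in this run; adjust for other setups.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

    # Transport ID string: transport type, address family, target address,
    # service id (port) and the subsystem NQN to identify.
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

    # Omitting subnqn targets the discovery subsystem
    # (nqn.2014-08.org.nvmexpress.discovery) instead, which is what produced the
    # Discovery Log Page output earlier in this log.
    "$SPDK_BIN/spdk_nvme_identify" -r "$TRID"
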
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:11.071 20:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:13.616 20:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:16.918 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 
0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:16.918 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:17.178 00:37:17.178 real 0m18.693s 00:37:17.178 user 0m5.072s 00:37:17.178 sys 0m10.615s 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:17.178 ************************************ 00:37:17.178 END TEST nvmf_identify_kernel_target 00:37:17.178 ************************************ 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.178 ************************************ 00:37:17.178 START TEST nvmf_auth_host 00:37:17.178 ************************************ 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:17.178 * Looking for test storage... 00:37:17.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:17.178 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:17.438 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:37:17.438 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:37:17.439 20:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:24.023 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:24.024 20:44:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:24.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:24.024 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:24.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:24.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:24.024 20:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.285 20:44:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:24.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:37:24.285 00:37:24.285 --- 10.0.0.2 ping statistics --- 00:37:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.285 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:24.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:37:24.285 00:37:24.285 --- 10.0.0.1 ping statistics --- 00:37:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.285 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3865855 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3865855 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3865855 ']' 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
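[Annotation] The nvmf_tcp_init trace above pairs the two E810 ports into a namespace-isolated loopback: cvl_0_0 is moved into a fresh namespace and given 10.0.0.2 as the target address, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, a firewall rule admits NVMe/TCP on port 4420, and the two pings confirm reachability before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, using the interface names from this run:

    # Sketch of the namespace topology set up by nvmf_tcp_init in this run.
    # Interface names (cvl_0_0 / cvl_0_1) come from the Intel E810 ports found above.
    TARGET_NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root namespace)
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP in

    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator

    # The target app is then started inside the namespace, as traced above:
    # ip netns exec "$TARGET_NS" /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
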
00:37:24.285 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:24.286 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.287 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:25.287 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:37:25.287 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:25.287 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:25.287 20:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.287 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c7c6d7680d0607079497f814ad208a02 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SxE 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c7c6d7680d0607079497f814ad208a02 0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c7c6d7680d0607079497f814ad208a02 0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c7c6d7680d0607079497f814ad208a02 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SxE 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SxE 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SxE 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.288 20:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=929f651f65443908752a198468538367c3f821a09712a6fcf1a2f18366b8d9f4 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6dX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 929f651f65443908752a198468538367c3f821a09712a6fcf1a2f18366b8d9f4 3 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 929f651f65443908752a198468538367c3f821a09712a6fcf1a2f18366b8d9f4 3 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=929f651f65443908752a198468538367c3f821a09712a6fcf1a2f18366b8d9f4 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6dX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6dX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6dX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac39bcb4812da53125ccb2093f8137d7f0352859efc0a4f3 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OLH 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac39bcb4812da53125ccb2093f8137d7f0352859efc0a4f3 0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac39bcb4812da53125ccb2093f8137d7f0352859efc0a4f3 0 
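[Annotation] The gen_dhchap_key traces above pull random bytes from /dev/urandom with xxd and hand the hex string to an inline "python -" snippet that emits the DH-HMAC-CHAP secret. The Python body is not shown in this log, so the sketch below is an assumption about what it computes, based on the "DHHC-1:<hash-id>:<base64>:" secret representation (hash id 00 for an unhashed key, 01/02/03 for SHA-256/384/512, with a CRC-32 of the key assumed to be appended before base64 encoding). The helper name make_dhchap_key is made up for illustration; it takes a byte count, whereas the traced gen_dhchap_key takes the hex-digit length.

    # Hypothetical stand-in for gen_dhchap_key/format_dhchap_key as traced above.
    # The DHHC-1 encoding details (CRC-32 suffix, endianness, hash-id byte) are an
    # assumption, not taken from this log.
    make_dhchap_key() {
        local len=$1 digest_id=$2                     # key length in bytes, hash id 0-3
        local hexkey
        hexkey=$(xxd -p -c0 -l "$len" /dev/urandom)   # same randomness source as the test
        python3 - "$hexkey" "$digest_id" <<'EOF'
    import base64, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    hmac_id = int(sys.argv[2])
    # Assumed layout: base64(key || CRC-32(key), little-endian), wrapped as DHHC-1:xx:...:
    blob = key + zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(hmac_id, base64.b64encode(blob).decode()))
    EOF
    }

    # Usage mirroring the traced calls: "null 32" is a 16-byte unhashed key,
    # "sha512 64" is a 32-byte key tagged for SHA-512.
    make_dhchap_key 16 0 > /tmp/spdk.key-null.example
    make_dhchap_key 32 3 > /tmp/spdk.key-sha512.example
    chmod 0600 /tmp/spdk.key-null.example /tmp/spdk.key-sha512.example
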
00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac39bcb4812da53125ccb2093f8137d7f0352859efc0a4f3 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OLH 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OLH 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OLH 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=921e267f9151be3bfd9cff5afa46c33b8173ccbb341a86a0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fLG 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 921e267f9151be3bfd9cff5afa46c33b8173ccbb341a86a0 2 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 921e267f9151be3bfd9cff5afa46c33b8173ccbb341a86a0 2 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=921e267f9151be3bfd9cff5afa46c33b8173ccbb341a86a0 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fLG 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fLG 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fLG 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.288 20:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93589abb70541d3a8b7af23a68aa4fd5 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.d7o 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93589abb70541d3a8b7af23a68aa4fd5 1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93589abb70541d3a8b7af23a68aa4fd5 1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93589abb70541d3a8b7af23a68aa4fd5 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:25.288 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.d7o 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.d7o 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.d7o 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d305d8177c098e47b0498b7a38d86c3 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IMe 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d305d8177c098e47b0498b7a38d86c3 1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d305d8177c098e47b0498b7a38d86c3 1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=4d305d8177c098e47b0498b7a38d86c3 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IMe 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IMe 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IMe 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8162eb0dd76e2c868ea70ffdbc7d5aa7dd7184450cd35a0 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9xE 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8162eb0dd76e2c868ea70ffdbc7d5aa7dd7184450cd35a0 2 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8162eb0dd76e2c868ea70ffdbc7d5aa7dd7184450cd35a0 2 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8162eb0dd76e2c868ea70ffdbc7d5aa7dd7184450cd35a0 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9xE 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9xE 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9xE 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:25.549 20:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf9bcb0434dac15be344b862dabe2671 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pfs 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf9bcb0434dac15be344b862dabe2671 0 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf9bcb0434dac15be344b862dabe2671 0 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cf9bcb0434dac15be344b862dabe2671 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pfs 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pfs 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pfs 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d71bd01d80e387b61f9c3d38fc30a41d5f892a993631d4d253d07652242531b 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pba 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d71bd01d80e387b61f9c3d38fc30a41d5f892a993631d4d253d07652242531b 3 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d71bd01d80e387b61f9c3d38fc30a41d5f892a993631d4d253d07652242531b 3 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d71bd01d80e387b61f9c3d38fc30a41d5f892a993631d4d253d07652242531b 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pba 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pba 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Pba 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3865855 00:37:25.549 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3865855 ']' 00:37:25.550 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.550 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:25.550 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.550 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:25.550 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SxE 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6dX ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6dX 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OLH 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fLG ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.fLG 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.d7o 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IMe ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IMe 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9xE 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pfs ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pfs 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Pba 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:25.810 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:26.070 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:26.071 20:44:37 
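[note] Once the five key/ckey pairs exist on disk, the loop traced above hands each file to the running target over its RPC socket with keyring_file_add_key, naming them key0..key4 and ckey0..ckey3 (ckey4 is empty, so its branch is skipped). A hedged sketch of the same loop, assuming the keys[]/ckeys[] arrays hold the temp-file paths produced earlier and using scripts/rpc.py in place of the test's rpc_cmd wrapper:

    # Hedged sketch of the key registration loop; socket path matches the trace.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    for i in "${!keys[@]}"; do
        $RPC keyring_file_add_key "key$i" "${keys[$i]}"
        # controller (bidirectional) key is optional; ckey4 is empty and skipped
        [[ -n "${ckeys[$i]}" ]] && $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done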
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:26.071 20:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:28.615 Waiting for block devices as requested 00:37:28.615 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:28.615 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:28.876 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:28.876 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:28.876 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:29.137 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:29.137 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:29.137 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:29.396 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:29.396 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:29.656 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:29.656 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:29.656 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:29.656 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:29.916 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:29.916 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:29.916 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:30.858 No valid GPT data, bailing 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:30.858 20:44:42 
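[note] configure_kernel_target, whose trace starts above and continues just below, builds a kernel-mode nvmet target through configfs: load nvmet, create a subsystem with one namespace backed by the local /dev/nvme0n1, and expose it on TCP port 4420. A condensed sketch of the sequence; the echo targets are the standard nvmet attribute names and are inferred, since the xtrace shows only the echoed values, not the redirections.

    # Hedged sketch of the configfs steps around this point in the log; run as root.
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # inferred attribute
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The nvme discover call that follows in the trace is just a sanity check that the port answers and lists both the discovery subsystem and nqn.2024-02.io.spdk:cnode0.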
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:30.858 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:31.119 00:37:31.119 Discovery Log Number of Records 2, Generation counter 2 00:37:31.119 =====Discovery Log Entry 0====== 00:37:31.119 trtype: tcp 00:37:31.119 adrfam: ipv4 00:37:31.119 subtype: current discovery subsystem 00:37:31.119 treq: not specified, sq flow control disable supported 00:37:31.119 portid: 1 00:37:31.119 trsvcid: 4420 00:37:31.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:31.119 traddr: 10.0.0.1 00:37:31.119 eflags: none 00:37:31.119 sectype: none 00:37:31.119 =====Discovery Log Entry 1====== 00:37:31.119 trtype: tcp 00:37:31.119 adrfam: ipv4 00:37:31.119 subtype: nvme subsystem 00:37:31.119 treq: not specified, sq flow control disable supported 00:37:31.119 portid: 1 00:37:31.119 trsvcid: 4420 00:37:31.119 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:31.119 traddr: 10.0.0.1 00:37:31.119 eflags: none 00:37:31.119 sectype: none 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
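[note] nvmet_auth_init and the first nvmet_auth_set_key call traced around here restrict the kernel subsystem to a single host NQN and install that host's DHCHAP material on the kernel side, so the SPDK initiator connecting later has something to authenticate against. Sketched below; the dhchap_* attribute names are the usual nvmet host-auth ones and are an assumption, since the xtrace again hides the redirection targets, and $key/$ckey stand for the DHHC-1 strings echoed in the trace.

    # Hedged sketch of nvmet_auth_init / nvmet_auth_set_key as traced above.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"          # inferred target of the 'echo 0'
    ln -s "$host" "$subsys/allowed_hosts/"
    # per-iteration key setup: digest, DH group, host key and (optional) ctrl key
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # assumed attribute name
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # assumed attribute name
    echo "$key"         > "$host/dhchap_key"        # host DHHC-1 secret
    echo "$ckey"        > "$host/dhchap_ctrl_key"   # controller DHHC-1 secret, when set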
-- host/auth.sh@49 -- # echo ffdhe2048 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:31.119 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.120 20:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.120 nvme0n1 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.120 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
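[note] On the initiator side, connect_authenticate (first exercised above with the full digest and DH-group lists, then repeated below for each individual combination) is two RPC calls plus a teardown: constrain the allowed DHCHAP digests and DH groups, attach a controller with the matching key pair, confirm that bdev_nvme_get_controllers reports nvme0, then detach. Using scripts/rpc.py with the same flags the rpc_cmd wrapper passes in the trace:

    # Hedged sketch of one connect_authenticate cycle (keyid 1, sha256/ffdhe2048).
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC bdev_nvme_get_controllers              # expect a controller named nvme0
    $RPC bdev_nvme_detach_controller nvme0      # tear down before the next combination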
00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:31.381 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.382 nvme0n1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.382 20:44:43 
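[note] The get_main_ns_ip block that repeats before every attach is only resolving which address to dial: it maps the transport to an environment-variable name (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and prints that variable's value, which is 10.0.0.1 throughout this run. A compact equivalent, with the transport-variable name assumed since it is not visible in the xtrace:

    # Hedged sketch of get_main_ns_ip as it appears in the trace.
    get_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local ip=${ip_candidates[$TEST_TRANSPORT]}   # transport is tcp in this job
        echo "${!ip}"                                # indirect expansion -> 10.0.0.1 here
    }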
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.382 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.644 nvme0n1 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.644 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.905 nvme0n1 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.905 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.906 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:31.906 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.906 20:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.166 nvme0n1 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:32.166 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.167 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.428 nvme0n1 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.428 20:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
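[note] From this point the log is the same cycle stamped out across the whole matrix: for each digest in sha256/sha384/sha512, each DH group from ffdhe2048 through ffdhe8192, and each of the five key ids, the kernel host entry is re-keyed with nvmet_auth_set_key and connect_authenticate re-attaches with the matching initiator key. The driving structure, with the two helpers sketched earlier:

    # Hedged outline of the loop that produces the remainder of this trace.
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done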
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.428 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.689 nvme0n1 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:32.689 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.690 
20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.690 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.950 nvme0n1 00:37:32.950 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.950 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.950 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.951 20:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.951 20:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.212 nvme0n1 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.212 20:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.212 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.473 nvme0n1 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:33.473 20:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.473 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 nvme0n1 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:33.734 20:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.995 nvme0n1 00:37:33.995 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:34.256 20:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.256 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.517 nvme0n1 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
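Note: stripped of the xtrace prefixes, each digest/DH-group/key-id pass in this section reduces to two host-side RPCs plus a verify-and-detach step. A minimal sketch of the sha256/ffdhe4096/key-id-2 pass in progress here, assuming rpc_cmd in this harness forwards to SPDK's scripts/rpc.py and that key2/ckey2 name keyring entries registered earlier in the run (not shown in this excerpt):

  # sketch only -- reconstructed from the traced commands above, not part of the captured log
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096        # restrict negotiation to the pair under test
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2                 # authenticate with host and controller secrets
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0             # tear down before the next key id

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 adds the --dhchap-ctrlr-key argument only when a controller secret exists for that key id, which is why the key-id-4 passes attach with --dhchap-key key4 alone.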
00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.517 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.778 nvme0n1 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:34.778 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.779 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.039 20:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.300 nvme0n1 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.300 20:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.300 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:35.301 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.301 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.562 nvme0n1 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.562 20:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.134 nvme0n1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 
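Note: on the target side, nvmet_auth_set_key (host/auth.sh@42-@51 above) only echoes the digest, the DH group, and the DHHC-1 secrets; the xtrace output does not show where those echoes are redirected. A sketch of where such writes would typically land when the target is the Linux kernel nvmet driven over configfs; the paths and attribute names below are an assumption, not captured in this log:

  # sketch only -- configfs destination assumed, not visible in the trace above
  HOSTNQN=nqn.2024-02.io.spdk:host0
  HOSTDIR=/sys/kernel/config/nvmet/hosts/${HOSTNQN}
  echo 'hmac(sha256)' > "${HOSTDIR}/dhchap_hash"       # digest used for DH-HMAC-CHAP
  echo ffdhe6144      > "${HOSTDIR}/dhchap_dhgroup"    # FFDHE group under test
  echo "${key}"       > "${HOSTDIR}/dhchap_key"        # host secret (the DHHC-1:... blob)
  [[ -z "${ckey}" ]] || echo "${ckey}" > "${HOSTDIR}/dhchap_ctrl_key"   # bidirectional secret, when present

Here ${key} and ${ckey} stand for the values assigned at host/auth.sh@45-@46 in the trace.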
00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.134 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.705 nvme0n1 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.705 20:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.705 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.706 20:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.277 nvme0n1 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.277 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.849 nvme0n1 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
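Key id 4, traced just above, is the pass without a controller key: ckey expands to the empty string and the [[ -z '' ]] check skips the controller-key step. On the host side the same decision is made by the ${ckeys[keyid]:+...} expansion at host/auth.sh@58, which yields either both --dhchap-ctrlr-key arguments or nothing at all. A small self-contained demo of that idiom, with placeholder key values rather than the ones in the log:

# Demo of the ${var:+word} expansion used at host/auth.sh@58: the ckey array
# only gains the --dhchap-ctrlr-key arguments when a controller key exists
# for that key id. Key values here are illustrative placeholders.
ckeys=([1]="DHHC-1:02:placeholder" [4]="")

for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]:-<none>}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args: <none>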
nvmf/common.sh@742 -- # ip_candidates=() 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.849 20:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.420 nvme0n1 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.420 20:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:39.362 nvme0n1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.362 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 nvme0n1 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:39.934 
20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.934 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:40.194 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.195 20:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.765 nvme0n1 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.765 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:41.026 
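The nvmet_auth_set_key half of each pass (host/auth.sh@44-51 in the trace) mirrors the same digest, DH group and secrets on the kernel nvmet target before the host tries to connect. The trace only shows the echo commands, not where their output goes; the sketch below assumes they land in the standard nvmet per-host DH-HMAC-CHAP attributes under configfs, with the host NQN taken from the attach commands above.

# Hypothetical expansion of nvmet_auth_set_key: push digest, DH group and
# DH-HMAC-CHAP secrets to the nvmet target for the test host. The configfs
# paths are an assumption (not visible in the trace); the echoed values are
# the ones appearing in the log.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host_dir/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "$key"            > "$host_dir/dhchap_key"      # host secret, DHHC-1:...
    # Key id 4 has no controller key, so bidirectional authentication is
    # only configured when a ckey was supplied.
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}
# e.g.: nvmet_auth_set_key_sketch sha256 ffdhe8192 "DHHC-1:02:..." "DHHC-1:01:..."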
20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.026 20:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.596 nvme0n1 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.596 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.857 20:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.429 nvme0n1 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.429 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.690 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.690 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.691 nvme0n1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.691 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.952 nvme0n1 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.952 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:42.953 20:44:54 
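get_main_ns_ip, traced repeatedly above (nvmf/common.sh@741-755), resolves which address the host should dial for the transport under test and prints it. The sketch below reassembles it from the traced statements; the TEST_TRANSPORT / NVMF_INITIATOR_IP names and the indirection that turns the variable name into 10.0.0.1 are assumptions, since xtrace only shows already-expanded values.

# Sketch of get_main_ns_ip as it appears in the trace: pick the address
# variable for the transport in use and print its value.
: "${TEST_TRANSPORT:=tcp}"          # the trace tests [[ -z tcp ]]
: "${NVMF_INITIATOR_IP:=10.0.0.1}"  # the value echoed at nvmf/common.sh@755

get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                        # NVMF_INITIATOR_IP -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}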
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.953 20:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.214 nvme0n1 00:37:43.214 20:44:55 
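Stepping back, the for-loops visible at host/auth.sh@100-103 drive this whole section: every digest is paired with every DH group and every key id, and each combination goes through nvmet_auth_set_key followed by connect_authenticate, exactly as in the passes above. A compressed sketch of that sweep; the array contents only reflect values that appear in this part of the log, so the real script may cover more digests and DH groups.

# Sketch of the sweep driven by host/auth.sh@100-103. keys[0..4] stand in for
# the key names used in the attach commands (key id 4 has no controller key).
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # host/auth.sh runs nvmet_auth_set_key and connect_authenticate
            # here; see the sketches earlier in this section for each step.
            printf 'testing %s / %s / key id %s\n' "$digest" "$dhgroup" "$keyid"
        done
    done
done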
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.214 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.475 nvme0n1 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.475 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.476 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.737 nvme0n1 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.737 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.998 nvme0n1 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.998 
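Everything in this portion of the log is the same cycle driven by the nested loops visible at host/auth.sh@101-@103: the outer loop walks the DH groups, the inner loop walks key ids 0-4, and each pair first re-keys the target side via nvmet_auth_set_key (the echoes of 'hmac(sha384)', the group name and the DHHC-1 secrets above) before connect_authenticate runs on the host side. A rough skeleton inferred from the trace; the keys/ckeys arrays and the body of nvmet_auth_set_key are defined earlier in auth.sh and are not reproduced here:

  digest=sha384                                                # the digest under test in this excerpt
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # groups visible in this excerpt
      for keyid in "${!keys[@]}"; do                           # keys[] / ckeys[] populated earlier in auth.sh
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the in-kernel target for this key
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side: set_options + attach + verify + detach
      done
  done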
20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.998 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:43.999 20:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.999 20:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.260 nvme0n1 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.260 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.521 nvme0n1 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:44.521 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.522 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.783 nvme0n1 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:44.783 
20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:44.783 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.784 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.072 nvme0n1 00:37:45.072 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.072 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.072 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.072 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.072 20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.072 
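The keyid 4 iterations above differ from the others in one respect: ckey is empty (host/auth.sh@46 sets ckey= with no value), so the [[ -z '' ]] guard skips the controller-key step on the target side and the ':+' expansion at host/auth.sh@58 drops the --dhchap-ctrlr-key argument from the attach, leaving only --dhchap-key key4. That iteration therefore exercises unidirectional (host-only) DH-HMAC-CHAP authentication. A small illustration of the ':+' idiom, with placeholder values:

  keyid=4
  ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=ck3 [4]="")             # placeholder secrets; index 4 intentionally empty
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # expands to nothing when ckeys[4] is empty
  echo "extra attach arguments: ${ckey[*]:-<none>}"          # prints <none> for keyid 4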
20:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:45.072 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.073 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.343 nvme0n1 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.343 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:45.603 20:44:57 
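The nvmf/common.sh@741-@755 block that precedes every attach is get_main_ns_ip choosing which address to dial: an associative array maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and the resolved value, 10.0.0.1 in this run, is echoed back to the caller. A condensed sketch of that selection logic; the transport variable is assumed to be TEST_TRANSPORT, which is not visible in this excerpt because the trace shows it already expanded to tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}       # tcp here, so ip holds the *name* NVMF_INITIATOR_IP
      [[ -n ${!ip} ]] && echo "${!ip}"           # indirect expansion yields 10.0.0.1 in this run
  }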
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.603 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.864 nvme0n1 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.864 20:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.126 nvme0n1 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.126 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.386 nvme0n1 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.386 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.647 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:46.648 20:44:58 
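The common/autotest_common.sh@559 xtrace_disable, set +x and @587 '[[ 0 == 0 ]]' lines that bracket every rpc_cmd are tracing bookkeeping: shell tracing is suspended while the RPC runs so its JSON output does not flood the log, then restored, and the 0 == 0 comparison appears to be the restore-side check (return status or nesting level) evaluating true on the happy path. A generic version of that suppress/restore idiom, purely illustrative and not the suite's implementation:

  quiet() {
      local rc had_xtrace=0
      if [[ $- == *x* ]]; then had_xtrace=1; set +x; fi   # remember whether tracing was on, then drop it
      "$@"
      rc=$?
      if (( had_xtrace )); then set -x; fi                # restore tracing before propagating the status
      return $rc
  }
  quiet scripts/rpc.py bdev_nvme_get_controllers          # path assumed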
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.648 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.909 nvme0n1 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.909 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.910 20:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.481 nvme0n1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.481 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.053 nvme0n1 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.053 20:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.053 20:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.053 20:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.625 nvme0n1 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:48.625 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:48.626 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.626 
20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.197 nvme0n1 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:49.197 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.198 20:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.459 nvme0n1 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.459 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:49.720 20:45:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.720 20:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.292 nvme0n1 00:37:50.292 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.292 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:50.292 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:50.292 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.292 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.553 20:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.124 nvme0n1 00:37:51.124 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.124 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:51.124 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:51.124 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.124 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:51.385 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:51.386 
20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.386 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.329 nvme0n1 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.329 20:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:52.329 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.330 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.901 nvme0n1 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.901 20:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.901 20:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.901 20:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.843 nvme0n1 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:53.843 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.844 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:54.105 nvme0n1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.105 20:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.366 nvme0n1 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:54.366 
20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.366 nvme0n1 00:37:54.366 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:54.627 
20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:54.627 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.628 nvme0n1 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.628 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.889 nvme0n1 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.889 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.890 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:54.890 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:55.150 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.151 20:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.151 nvme0n1 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.151 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.412 
20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:55.412 20:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.412 nvme0n1 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.412 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:55.673 20:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:55.673 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:55.674 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:55.674 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.674 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.674 nvme0n1 00:37:55.674 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:55.934 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.935 20:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.935 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.195 nvme0n1 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.195 20:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:56.195 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:56.196 
20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.196 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:56.457 nvme0n1 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:56.457 20:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.457 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.718 nvme0n1 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:56.718 20:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:56.718 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:56.719 20:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.719 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.980 nvme0n1 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.980 20:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:57.241 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:57.242 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:57.242 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.242 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.503 nvme0n1 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:57.503 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:57.504 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:57.504 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:57.504 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.504 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.765 nvme0n1 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.765 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.026 20:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.288 nvme0n1 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.288 20:45:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.288 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.861 nvme0n1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:58.861 20:45:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.861 20:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.446 nvme0n1 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:59.446 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.447 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.707 nvme0n1 00:37:59.707 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.968 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.969 20:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.538 nvme0n1 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:00.538 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:00.539 20:45:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.539 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 nvme0n1 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzdjNmQ3NjgwZDA2MDcwNzk0OTdmODE0YWQyMDhhMDIfaJqn: 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI5ZjY1MWY2NTQ0MzkwODc1MmExOTg0Njg1MzgzNjdjM2Y4MjFhMDk3MTJhNmZjZjFhMmYxODM2NmI4ZDlmNC1AriE=: 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.109 20:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.679 nvme0n1 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.679 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.939 20:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.510 nvme0n1 00:38:02.510 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.510 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:02.510 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:02.510 20:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.511 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTM1ODlhYmI3MDU0MWQzYThiN2FmMjNhNjhhYTRmZDV/6chW: 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQzMDVkODE3N2MwOThlNDdiMDQ5OGI3YTM4ZDg2YzNvu2Bv: 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.771 20:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.771 20:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.341 nvme0n1 00:38:03.341 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.341 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:03.341 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:03.341 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.341 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgxNjJlYjBkZDc2ZTJjODY4ZWE3MGZmZGJjN2Q1YWE3ZGQ3MTg0NDUwY2QzNWEw8T9Ung==: 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Y5YmNiMDQzNGRhYzE1YmUzNDRiODYyZGFiZTI2NzE8f5CK: 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:03.602 20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.602 
20:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.172 nvme0n1 00:38:04.172 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQ3MWJkMDFkODBlMzg3YjYxZjljM2QzOGZjMzBhNDFkNWY4OTJhOTkzNjMxZDRkMjUzZDA3NjUyMjQyNTMxYkexxqY=: 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:04.499 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.500 20:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.069 nvme0n1 00:38:05.069 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.069 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.070 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzOWJjYjQ4MTJkYTUzMTI1Y2NiMjA5M2Y4MTM3ZDdmMDM1Mjg1OWVmYzBhNGYzttBdLQ==: 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIxZTI2N2Y5MTUxYmUzYmZkOWNmZjVhZmE0NmMzM2I4MTczY2NiYjM0MWE4NmEw1uGE6g==: 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.330 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.331 request: 00:38:05.331 { 00:38:05.331 "name": "nvme0", 00:38:05.331 "trtype": "tcp", 00:38:05.331 "traddr": "10.0.0.1", 00:38:05.331 "adrfam": "ipv4", 00:38:05.331 "trsvcid": "4420", 00:38:05.331 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:05.331 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:05.331 "prchk_reftag": false, 00:38:05.331 "prchk_guard": false, 00:38:05.331 "hdgst": false, 00:38:05.331 "ddgst": false, 00:38:05.331 "method": "bdev_nvme_attach_controller", 00:38:05.331 "req_id": 1 00:38:05.331 } 00:38:05.331 Got JSON-RPC error response 00:38:05.331 response: 00:38:05.331 { 00:38:05.331 "code": -5, 00:38:05.331 "message": "Input/output error" 00:38:05.331 } 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:05.331 20:45:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.331 request: 00:38:05.331 { 00:38:05.331 "name": "nvme0", 00:38:05.331 "trtype": "tcp", 00:38:05.331 "traddr": "10.0.0.1", 00:38:05.331 "adrfam": "ipv4", 00:38:05.331 "trsvcid": "4420", 00:38:05.331 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:05.331 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:05.331 "prchk_reftag": false, 00:38:05.331 "prchk_guard": false, 00:38:05.331 "hdgst": false, 00:38:05.331 "ddgst": false, 00:38:05.331 "dhchap_key": "key2", 00:38:05.331 "method": "bdev_nvme_attach_controller", 00:38:05.331 "req_id": 1 00:38:05.331 } 00:38:05.331 Got JSON-RPC error response 00:38:05.331 response: 00:38:05.331 { 00:38:05.331 "code": -5, 00:38:05.331 "message": "Input/output error" 00:38:05.331 } 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.331 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.592 request: 00:38:05.592 { 00:38:05.592 "name": "nvme0", 00:38:05.592 "trtype": "tcp", 00:38:05.592 "traddr": "10.0.0.1", 00:38:05.592 "adrfam": "ipv4", 00:38:05.592 "trsvcid": "4420", 00:38:05.592 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:05.592 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:05.592 "prchk_reftag": false, 00:38:05.592 "prchk_guard": false, 00:38:05.592 "hdgst": false, 00:38:05.592 "ddgst": false, 00:38:05.592 "dhchap_key": "key1", 00:38:05.592 "dhchap_ctrlr_key": "ckey2", 00:38:05.592 "method": "bdev_nvme_attach_controller", 00:38:05.592 "req_id": 1 00:38:05.592 } 00:38:05.592 Got JSON-RPC error response 00:38:05.592 response: 00:38:05.592 { 00:38:05.592 "code": -5, 00:38:05.592 "message": "Input/output error" 00:38:05.592 } 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:05.592 rmmod nvme_tcp 00:38:05.592 rmmod nvme_fabrics 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3865855 ']' 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3865855 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3865855 ']' 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3865855 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3865855 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3865855' 00:38:05.592 killing process with pid 3865855 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3865855 00:38:05.592 20:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3865855 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:06.534 20:45:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.534 20:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:08.447 20:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:11.751 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:11.751 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:12.012 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:12.273 20:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SxE /tmp/spdk.key-null.OLH /tmp/spdk.key-sha256.d7o /tmp/spdk.key-sha384.9xE /tmp/spdk.key-sha512.Pba /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:38:12.273 20:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:15.594 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:15.594 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:15.594 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:15.594 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:15.594 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:15.855 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:15.855 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:16.116 00:38:16.116 real 0m58.920s 00:38:16.116 user 0m52.200s 00:38:16.116 sys 0m14.904s 00:38:16.116 20:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:16.116 20:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.116 ************************************ 00:38:16.116 END TEST nvmf_auth_host 00:38:16.116 ************************************ 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.116 ************************************ 00:38:16.116 START TEST nvmf_digest 00:38:16.116 ************************************ 00:38:16.116 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:16.377 * Looking for test storage... 
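Before the digest suite gets going, the nvmf_auth_host flow that just finished above reduces, per key id, to the same pair of SPDK JSON-RPC calls: bdev_nvme_set_options to restrict the digest/DH group under test, then bdev_nvme_attach_controller carrying the host and controller secrets. A minimal bash sketch of that flow, using scripts/rpc.py in place of the suite's rpc_cmd wrapper (key id 1 shown; it assumes the named keys key1/ckey1 were registered earlier in the run, which is outside this excerpt):

    # illustrative sketch of the per-key DHCHAP flow driven by host/auth.sh above;
    # assumes key1/ckey1 already exist and the target listens on 10.0.0.1:4420
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers          # expect one controller named nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The NOT-wrapped attach attempts above (no key at all, key2 alone, and key1 paired with ckey2) are meant to be rejected by the target, so the JSON-RPC error responses with code -5 (Input/output error) are the passing outcome for those cases.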
00:38:16.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:16.378 
20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:38:16.378 20:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.524 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:24.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:24.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.525 
20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:24.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:24.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.525 20:45:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:24.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:24.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:38:24.525 00:38:24.525 --- 10.0.0.2 ping statistics --- 00:38:24.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.525 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:24.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:38:24.525 00:38:24.525 --- 10.0.0.1 ping statistics --- 00:38:24.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.525 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.525 ************************************ 00:38:24.525 START TEST nvmf_digest_clean 00:38:24.525 ************************************ 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3883065 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3883065 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3883065 ']' 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:24.525 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.526 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:24.526 20:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.526 [2024-07-22 20:45:35.554537] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:24.526 [2024-07-22 20:45:35.554655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.526 EAL: No free 2048 kB hugepages reported on node 1 00:38:24.526 [2024-07-22 20:45:35.688895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.526 [2024-07-22 20:45:35.869626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.526 [2024-07-22 20:45:35.869672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.526 [2024-07-22 20:45:35.869685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.526 [2024-07-22 20:45:35.869695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.526 [2024-07-22 20:45:35.869706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
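The sequence above is nvmf_tcp_init from nvmf/common.sh: one of the two E810 ports is moved into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can exchange real NVMe/TCP traffic on a single host. A condensed sketch of those steps, using the interface names from the trace and run as root, looks roughly like this:

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                           # target port gets its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                               # initiator -> target sanity check
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target -> initiator sanity check

Because the target port lives inside the namespace, the nvmf_tgt application is always launched through ip netns exec cvl_0_0_ns_spdk, which is why that prefix appears on the nvmf_tgt command line in the trace.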
00:38:24.526 [2024-07-22 20:45:35.869733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:24.526 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.786 null0 00:38:24.786 [2024-07-22 20:45:36.594810] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.786 [2024-07-22 20:45:36.619034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.786 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:24.786 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:24.786 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:24.786 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:24.786 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3883297 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3883297 /var/tmp/bperf.sock 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3883297 ']' 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:24.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:24.787 20:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.787 [2024-07-22 20:45:36.701616] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:24.787 [2024-07-22 20:45:36.701723] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883297 ] 00:38:24.787 EAL: No free 2048 kB hugepages reported on node 1 00:38:25.047 [2024-07-22 20:45:36.829946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.047 [2024-07-22 20:45:37.005964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.619 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:25.619 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:25.619 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:25.619 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:25.619 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:25.880 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:25.880 20:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:26.451 nvme0n1 00:38:26.451 20:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:26.452 20:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:26.452 Running I/O for 2 seconds... 
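run_bperf in host/digest.sh drives each workload through a dedicated bdevperf instance on its own RPC socket. The exchange just logged for the first shape (randread, 4 KiB, queue depth 128) boils down to the sketch below, where $SPDK stands in for the checked-out tree and the wait-for-socket step (waitforlisten in the trace) is elided:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!                                                      # harness waits for $SOCK before issuing RPCs
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init               # no DSA requested, so software accel is used
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst enables the TCP data digest
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests  # the 2-second run whose results follow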
00:38:28.366 00:38:28.366 Latency(us) 00:38:28.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.366 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:28.366 nvme0n1 : 2.05 17874.47 69.82 0.00 0.00 7055.86 3331.41 46530.56 00:38:28.366 =================================================================================================================== 00:38:28.366 Total : 17874.47 69.82 0.00 0.00 7055.86 3331.41 46530.56 00:38:28.366 0 00:38:28.366 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:28.366 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:28.366 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:28.366 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:28.366 | select(.opcode=="crc32c") 00:38:28.366 | "\(.module_name) \(.executed)"' 00:38:28.366 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3883297 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3883297 ']' 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3883297 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883297 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883297' 00:38:28.627 killing process with pid 3883297 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3883297 00:38:28.627 Received shutdown signal, test time was about 2.000000 seconds 00:38:28.627 00:38:28.627 Latency(us) 00:38:28.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.627 =================================================================================================================== 00:38:28.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:28.627 20:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3883297 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3884107 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3884107 /var/tmp/bperf.sock 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3884107 ']' 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:29.199 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:29.199 [2024-07-22 20:45:41.155758] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:29.199 [2024-07-22 20:45:41.155872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884107 ] 00:38:29.199 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:29.199 Zero copy mechanism will not be used. 
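After each 2-second run, host/digest.sh confirms that the digest CRCs were really computed, and by the expected accel module, before tearing the bdevperf instance down. A sketch of that verification (variables as in the previous sketch; the software module is expected here because DSA is disabled in these runs):

  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s $SOCK accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))                    # crc32c operations were actually executed
  [[ $acc_module == software ]]             # ...and by the software module, not a hardware engine
  kill "$bperfpid" && wait "$bperfpid"      # killprocess in the trace amounts to kill + wait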
00:38:29.199 EAL: No free 2048 kB hugepages reported on node 1 00:38:29.459 [2024-07-22 20:45:41.274600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.459 [2024-07-22 20:45:41.410160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.032 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:30.032 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:30.032 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:30.032 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:30.032 20:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:30.293 20:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:30.293 20:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:30.863 nvme0n1 00:38:30.863 20:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:30.863 20:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:30.863 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:30.863 Zero copy mechanism will not be used. 00:38:30.863 Running I/O for 2 seconds... 
00:38:32.778 00:38:32.778 Latency(us) 00:38:32.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.778 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:32.778 nvme0n1 : 2.00 3191.59 398.95 0.00 0.00 5010.21 1378.99 8028.16 00:38:32.778 =================================================================================================================== 00:38:32.778 Total : 3191.59 398.95 0.00 0.00 5010.21 1378.99 8028.16 00:38:32.778 0 00:38:32.778 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:32.778 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:32.778 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:32.778 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:32.778 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:32.778 | select(.opcode=="crc32c") 00:38:32.778 | "\(.module_name) \(.executed)"' 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3884107 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3884107 ']' 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3884107 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884107 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884107' 00:38:33.039 killing process with pid 3884107 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3884107 00:38:33.039 Received shutdown signal, test time was about 2.000000 seconds 00:38:33.039 00:38:33.039 Latency(us) 00:38:33.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.039 =================================================================================================================== 00:38:33.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.039 20:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3884107 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3884932 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3884932 /var/tmp/bperf.sock 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3884932 ']' 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:33.611 20:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:33.611 [2024-07-22 20:45:45.523478] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:38:33.611 [2024-07-22 20:45:45.523594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884932 ] 00:38:33.611 EAL: No free 2048 kB hugepages reported on node 1 00:38:33.897 [2024-07-22 20:45:45.644842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.897 [2024-07-22 20:45:45.780882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.471 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:34.471 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:34.471 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:34.471 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:34.471 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:34.731 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:34.731 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:34.992 nvme0n1 00:38:34.992 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:34.992 20:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:34.992 Running I/O for 2 seconds... 
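The clean-digest test repeats this attach/run/verify cycle for four workload shapes; host/digest.sh (lines 128 to 131 in the trace) issues the calls back to back, which the loop below only summarizes:

  for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
      run_bperf $spec false    # rw, block size, queue depth; word-splitting of $spec is intentional; false = no DSA
  done
  # The 128 KiB shapes exceed the 65536-byte zero-copy threshold, hence the
  # "Zero copy mechanism will not be used" notices in the trace.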
00:38:36.904 00:38:36.904 Latency(us) 00:38:36.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.904 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.904 nvme0n1 : 2.00 19629.18 76.68 0.00 0.00 6511.44 2512.21 14090.24 00:38:36.904 =================================================================================================================== 00:38:36.904 Total : 19629.18 76.68 0.00 0.00 6511.44 2512.21 14090.24 00:38:36.904 0 00:38:37.164 20:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:37.164 20:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:37.164 20:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:37.164 20:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:37.164 | select(.opcode=="crc32c") 00:38:37.164 | "\(.module_name) \(.executed)"' 00:38:37.164 20:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:37.164 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:37.164 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3884932 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3884932 ']' 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3884932 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884932 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884932' 00:38:37.165 killing process with pid 3884932 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3884932 00:38:37.165 Received shutdown signal, test time was about 2.000000 seconds 00:38:37.165 00:38:37.165 Latency(us) 00:38:37.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.165 =================================================================================================================== 00:38:37.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:37.165 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3884932 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3885800 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3885800 /var/tmp/bperf.sock 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3885800 ']' 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:37.775 20:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:37.775 [2024-07-22 20:45:49.737982] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:37.775 [2024-07-22 20:45:49.738112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885800 ] 00:38:37.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:37.775 Zero copy mechanism will not be used. 
00:38:38.036 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.036 [2024-07-22 20:45:49.860924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.036 [2024-07-22 20:45:49.995688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.608 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:38.608 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:38:38.608 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:38.608 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:38.608 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:38.869 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:38.869 20:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:39.130 nvme0n1 00:38:39.391 20:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:39.391 20:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:39.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:39.391 Zero copy mechanism will not be used. 00:38:39.391 Running I/O for 2 seconds... 
00:38:41.305 00:38:41.305 Latency(us) 00:38:41.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.305 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:41.305 nvme0n1 : 2.00 5074.65 634.33 0.00 0.00 3148.50 1802.24 10048.85 00:38:41.305 =================================================================================================================== 00:38:41.305 Total : 5074.65 634.33 0.00 0.00 3148.50 1802.24 10048.85 00:38:41.305 0 00:38:41.305 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:41.305 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:41.305 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:41.305 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:41.305 | select(.opcode=="crc32c") 00:38:41.305 | "\(.module_name) \(.executed)"' 00:38:41.305 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3885800 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3885800 ']' 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3885800 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885800 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885800' 00:38:41.566 killing process with pid 3885800 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3885800 00:38:41.566 Received shutdown signal, test time was about 2.000000 seconds 00:38:41.566 00:38:41.566 Latency(us) 00:38:41.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.566 =================================================================================================================== 00:38:41.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:41.566 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3885800 00:38:42.137 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3883065 00:38:42.137 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3883065 ']' 00:38:42.137 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3883065 00:38:42.137 20:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883065 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883065' 00:38:42.137 killing process with pid 3883065 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3883065 00:38:42.137 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3883065 00:38:43.080 00:38:43.080 real 0m19.471s 00:38:43.080 user 0m36.844s 00:38:43.080 sys 0m3.830s 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:43.080 ************************************ 00:38:43.080 END TEST nvmf_digest_clean 00:38:43.080 ************************************ 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:43.080 ************************************ 00:38:43.080 START TEST nvmf_digest_error 00:38:43.080 ************************************ 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:43.080 20:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3886843 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3886843 00:38:43.080 20:45:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3886843 ']' 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:43.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:43.080 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:43.080 [2024-07-22 20:45:55.094504] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:43.080 [2024-07-22 20:45:55.094607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:43.341 EAL: No free 2048 kB hugepages reported on node 1 00:38:43.341 [2024-07-22 20:45:55.216665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.601 [2024-07-22 20:45:55.393236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:43.601 [2024-07-22 20:45:55.393284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:43.601 [2024-07-22 20:45:55.393297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:43.601 [2024-07-22 20:45:55.393306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:43.601 [2024-07-22 20:45:55.393317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:43.601 [2024-07-22 20:45:55.393354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:44.173 [2024-07-22 20:45:55.931256] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.173 20:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:44.173 null0 00:38:44.174 [2024-07-22 20:45:56.182845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.434 [2024-07-22 20:45:56.207071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3887036 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3887036 /var/tmp/bperf.sock 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3887036 ']' 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
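For nvmf_digest_error, the target is again started with --wait-for-rpc so that its crc32c work can be routed through the "error" accel module before initialization, and the run that follows corrupts digests on demand. Stripped of the harness wrappers, the configuration logged around this point amounts to the sketch below ($SPDK as before; target RPCs go to the default /var/tmp/spdk.sock, bdevperf RPCs to /var/tmp/bperf.sock):

  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error            # on the target, before framework init
  # (framework init, the null0 bdev and the 10.0.0.2:4420 TCP listener follow as in the trace)
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable  # start with injection off
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c operations
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With crc32c results corrupted, the digests no longer verify: the host's nvme_tcp layer logs "data digest error" and the affected reads complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the completions below show.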
00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:44.434 20:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:44.434 [2024-07-22 20:45:56.297760] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:44.434 [2024-07-22 20:45:56.297864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887036 ] 00:38:44.434 EAL: No free 2048 kB hugepages reported on node 1 00:38:44.434 [2024-07-22 20:45:56.420303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.694 [2024-07-22 20:45:56.555497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.265 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:45.265 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:45.265 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:45.265 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:45.265 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:45.266 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.266 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:45.266 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.266 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:45.266 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:45.837 nvme0n1 00:38:45.838 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:45.838 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.838 20:45:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:45.838 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.838 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:45.838 20:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:45.838 Running I/O for 2 seconds... 00:38:45.838 [2024-07-22 20:45:57.689643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.689683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.689696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.700867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.700894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.700905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.715899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.715922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.715932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.730906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.730930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.730940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.747639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.747663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.747673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.763905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.763929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 
20:45:57.763938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.779472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.779495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.794431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.794454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.794463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.809298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.809322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.809331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.825538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.825561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.825570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.836771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.836793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.836802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:45.838 [2024-07-22 20:45:57.852535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:45.838 [2024-07-22 20:45:57.852559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:45.838 [2024-07-22 20:45:57.852568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.869024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.869048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.869057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.884373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.884396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.884405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.899896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.899919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.899928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.915264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.915287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.915297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.930113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.930137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.930150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.942306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.942330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.942339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.957134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.957157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.957166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.972232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 
20:45:57.972254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.972263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:57.986218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:57.986242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:57.986251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.001368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.001392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.001465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.018172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.018195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.018210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.029088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.029111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.029120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.044438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.044461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.044470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.058513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.058539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.058548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.073063] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.073135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.073145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.086516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.086538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.086547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.100 [2024-07-22 20:45:58.098258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.100 [2024-07-22 20:45:58.098280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.100 [2024-07-22 20:45:58.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.101 [2024-07-22 20:45:58.113326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.101 [2024-07-22 20:45:58.113349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.101 [2024-07-22 20:45:58.113358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.127914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.127969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.127979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.143336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.143407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.143416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.156375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.156398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.156407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.169742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.169765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.169777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.183679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.183702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.183711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.196480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.196502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.196511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.210839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.362 [2024-07-22 20:45:58.210862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.362 [2024-07-22 20:45:58.210871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.362 [2024-07-22 20:45:58.224611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.224633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.224642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.239821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.239845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.239853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.253409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.253433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.253442] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.267106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.267129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.267138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.280765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.280788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.280796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.294357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.294385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.294394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.308034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.308057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.308067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.321706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.321730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.321738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.335384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.335407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.335425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.349027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4193 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.349060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.362690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.362713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.362722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.363 [2024-07-22 20:45:58.376367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.363 [2024-07-22 20:45:58.376390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.363 [2024-07-22 20:45:58.376399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.390065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.390088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.390097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.403718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.403741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.403754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.417387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.417409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.417417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.431463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.431485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.431494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.445143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.445165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.445174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.458834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.458856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.458865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.472520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.624 [2024-07-22 20:45:58.472542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.624 [2024-07-22 20:45:58.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.624 [2024-07-22 20:45:58.486125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.486148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.486157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.499698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.499720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.499729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.513352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.513374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.513383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.527371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.527397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.527406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.541019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.541041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.541050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.554637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.554659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.554668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.570178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.570206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.582177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.582199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.582214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.595475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.595498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.595507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.608579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.608602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.608610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 20:45:58.623637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.623659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.623668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.625 [2024-07-22 
20:45:58.639828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.625 [2024-07-22 20:45:58.639851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.625 [2024-07-22 20:45:58.639863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.656072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.656095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.656104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.671187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.671216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.671225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.684076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.684099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.698567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.698590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.698599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.711456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.711477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.711486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.724825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.724847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.724856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.738170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.738193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.738207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.751705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.751728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.751736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.765057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.765081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.765090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.778404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.778426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.778435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.791745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.791767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.791776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.806399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.806431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.818128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.818152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 
20:45:58.818160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.833124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.833146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.833155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.846711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.886 [2024-07-22 20:45:58.846734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.886 [2024-07-22 20:45:58.846744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.886 [2024-07-22 20:45:58.862245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.887 [2024-07-22 20:45:58.862268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.887 [2024-07-22 20:45:58.862277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.887 [2024-07-22 20:45:58.879147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.887 [2024-07-22 20:45:58.879170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.887 [2024-07-22 20:45:58.879179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:46.887 [2024-07-22 20:45:58.894732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:46.887 [2024-07-22 20:45:58.894754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:46.887 [2024-07-22 20:45:58.894762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.911414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.911525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.911536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.925447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.925469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:4966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.925477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.937108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.937136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.937144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.952284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.952307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.952315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.969337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.969360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.969369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.982886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.982907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.982917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:58.999357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:58.999379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:58.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:59.014822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:59.014848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:59.014857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:59.026271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 
20:45:59.026294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:59.026303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:59.043271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:59.043295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:59.043304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.148 [2024-07-22 20:45:59.060495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.148 [2024-07-22 20:45:59.060517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.148 [2024-07-22 20:45:59.060526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.076573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.076596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.076605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.089856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.089878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.089886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.103924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.103947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.103957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.117773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.117888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.117897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.129167] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.129190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.129199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.144695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.144718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.144727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.149 [2024-07-22 20:45:59.159222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.149 [2024-07-22 20:45:59.159246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.149 [2024-07-22 20:45:59.159254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.174506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.174528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.174537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.189634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.189709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.189720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.201157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.201195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.201211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.214590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.214614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.214622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.228338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.228361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.228370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.241267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.241289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.241299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.255675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.255701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.255710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.269716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.410 [2024-07-22 20:45:59.269738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.410 [2024-07-22 20:45:59.269746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.410 [2024-07-22 20:45:59.282723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.282746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.282755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.295857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.295879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.295888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.310832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.310905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.310915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.323177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.323199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.323213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.336475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.336531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.336540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.351326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.351348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.363822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.363845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.363854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.376719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.376742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.376750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.390415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.390438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.390446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.407240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.407263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:14073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.411 [2024-07-22 20:45:59.420284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.411 [2024-07-22 20:45:59.420307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.411 [2024-07-22 20:45:59.420315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.434591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.434614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.434624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.445345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.445367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.445377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.461397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.461420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.461429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.476964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.476988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.476997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.491600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.491635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.505181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 
20:45:59.505209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.505218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.516506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.672 [2024-07-22 20:45:59.516529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.672 [2024-07-22 20:45:59.516538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.672 [2024-07-22 20:45:59.532020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.532043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.532052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.547813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.547835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.564129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.564152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.564161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.579118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.579140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.579149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.591252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.591275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.591284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.606237] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.606259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.606268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.622011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.622034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.622043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.638368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.638391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.638399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.654455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.654477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.654486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.673 [2024-07-22 20:45:59.670394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:47.673 [2024-07-22 20:45:59.670417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:47.673 [2024-07-22 20:45:59.670426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:47.934 00:38:47.934 Latency(us) 00:38:47.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.934 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:47.934 nvme0n1 : 2.05 17479.89 68.28 0.00 0.00 7167.29 3126.61 51118.08 00:38:47.934 =================================================================================================================== 00:38:47.934 Total : 17479.89 68.28 0.00 0.00 7167.29 3126.61 51118.08 00:38:47.934 0 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:47.934 | .driver_specific 00:38:47.934 | .nvme_error 00:38:47.934 | .status_code 00:38:47.934 | 
.command_transient_transport_error' 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3887036 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3887036 ']' 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3887036 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:47.934 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3887036 00:38:48.196 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:48.196 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:48.196 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3887036' 00:38:48.196 killing process with pid 3887036 00:38:48.196 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3887036 00:38:48.196 Received shutdown signal, test time was about 2.000000 seconds 00:38:48.196 00:38:48.196 Latency(us) 00:38:48.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.196 =================================================================================================================== 00:38:48.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:48.196 20:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3887036 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3887895 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3887895 /var/tmp/bperf.sock 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3887895 ']' 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.457 20:46:00 
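A minimal sketch of the pass/fail check that the host/digest.sh trace above performs, assuming the bperf RPC socket at /var/tmp/bperf.sock is still listening and the bdev is still named nvme0n1; the rpc.py path and the jq filter are copied from the trace, the errcount variable name is only illustrative, and the greater-than-zero comparison mirrors the (( 140 > 0 )) evaluation seen above rather than any separate API.

# Read back the per-bdev NVMe error counters kept by the bdevperf app
# (error statistics enabled via --nvme-error-stat, see the set-up trace below)
# and treat any COMMAND TRANSIENT TRANSPORT ERROR completions as evidence that
# the injected data-digest corruption was detected on the read path.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "data digest errors surfaced as $errcount transient transport errors"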
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:48.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:48.457 20:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:48.717 [2024-07-22 20:46:00.539330] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:48.717 [2024-07-22 20:46:00.539447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887895 ] 00:38:48.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:48.717 Zero copy mechanism will not be used. 00:38:48.717 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.717 [2024-07-22 20:46:00.659668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.978 [2024-07-22 20:46:00.795056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:49.549 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:49.810 nvme0n1 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
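A condensed sketch of the set-up RPCs traced at this point, restricted to calls that appear verbatim in the trace: NVMe error statistics are enabled on the bdevperf app, crc32c error injection is first disabled, the TCP controller is attached with data digest (--ddgst) on, and crc32c corruption is then injected through accel_error_inject_error ahead of the perform_tests call that follows. The rpc.py path and /var/tmp/bperf.sock are taken from the trace; the socket behind the plain rpc_cmd calls is not shown in this excerpt, so it is left as a labelled placeholder.

# bperf-side RPCs (socket path shown in the trace above)
bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
# rpc_cmd-side RPCs ($TGT_RPC_SOCK is a placeholder; the real socket is not visible in this excerpt)
rpc_cmd()   { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$TGT_RPC_SOCK" "$@"; }

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, never retry
rpc_cmd   accel_error_inject_error -o crc32c -t disable                   # start with crc32c injection disabled
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # TCP data digest (DDGST) enabled
rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 32             # inject crc32c corruption (options as traced)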
-- # set +x 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:49.811 20:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:49.811 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:49.811 Zero copy mechanism will not be used. 00:38:49.811 Running I/O for 2 seconds... 00:38:49.811 [2024-07-22 20:46:01.748972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.749013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.749026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.759231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.759261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.759272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.768279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.768305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.768315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.776621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.776645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.776654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.784054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.784081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.784090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.791297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.791320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.791329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.798323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.798346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.798356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.804951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.804974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.804983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.811515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.811539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.811549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.817740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.817764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.817773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.824080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.824104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.824113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:49.811 [2024-07-22 20:46:01.830566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:49.811 [2024-07-22 20:46:01.830589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:49.811 [2024-07-22 20:46:01.830598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.836607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.836630] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.836643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.842741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.842764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.842773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.848935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.848957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.848966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.855344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.855366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.855375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.861647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.861669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.861677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.868148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.868171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.868180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.873988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.073 [2024-07-22 20:46:01.874010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.073 [2024-07-22 20:46:01.874019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.073 [2024-07-22 20:46:01.880266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.880288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.880297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.886577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.886600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.886609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.893019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.893046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.893055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.899400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.899422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.899431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.905731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.905753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.905762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.912275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.912298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.912307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.918254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.918276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.918285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 
[2024-07-22 20:46:01.924532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.924555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.924564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.930332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.930354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.930363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.936493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.936515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.936524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.942707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.942729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.942742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.948707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.948729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.948738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.954644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.954666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.954675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.960808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.960830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.960839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.966987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.967009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.967018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.973228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.973250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.979461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.979483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.985466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.985488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.985497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.991379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.991401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.991410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:01.997408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:01.997436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:01.997445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.003542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.003565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 
[2024-07-22 20:46:02.003574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.009775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.009797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:02.009806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.015707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:02.015738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.022180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.022208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:02.022217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.028675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.028697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:02.028706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.034670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.034693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.074 [2024-07-22 20:46:02.034701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.074 [2024-07-22 20:46:02.040581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.074 [2024-07-22 20:46:02.040604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.040619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.046681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.046703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.046716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.052728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.052751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.052760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.058665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.058688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.058697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.064849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.064871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.064879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.071258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.071280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.071289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.077506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.077537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.083503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.075 [2024-07-22 20:46:02.083525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.083534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.075 [2024-07-22 20:46:02.089438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 
00:38:50.075 [2024-07-22 20:46:02.089461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.075 [2024-07-22 20:46:02.089471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.095458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.095481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.095490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.101647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.101673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.101682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.107876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.107899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.107908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.113834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.113856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.113865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.120004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.120026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.120034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.125984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.126007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.126015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.131919] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.131941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.131950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.138087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.138108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.138117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.144197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.144225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.144234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.150284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.150305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.150317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.156283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.156306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.156315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.162582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.162604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.162613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.168550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.168572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.168581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.175315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.175340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.175350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.184143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.184168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.184177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.192773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.192796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.192805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.201404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.201427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.201437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.210077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.210100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.210109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.219475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.219502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.219511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.227818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.227840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 
20:46:02.227849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.238204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.238227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.238236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.248548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.248571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.248580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.257097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.257120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.337 [2024-07-22 20:46:02.257128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.337 [2024-07-22 20:46:02.267491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.337 [2024-07-22 20:46:02.267513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.267522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.278554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.278577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.278587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.289193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.289222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.289232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.300175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.300197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.300214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.311116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.311138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.311146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.321746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.321769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.321778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.331829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.331861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.342579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.342602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.342611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.338 [2024-07-22 20:46:02.354717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.338 [2024-07-22 20:46:02.354740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.338 [2024-07-22 20:46:02.354749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.600 [2024-07-22 20:46:02.364007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.600 [2024-07-22 20:46:02.364030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.600 [2024-07-22 20:46:02.364039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.600 [2024-07-22 20:46:02.373609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 
20:46:02.373631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.373640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.383540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.383582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.393558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.393584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.393593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.402568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.402591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.402600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.410817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.410839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.410848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.418580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.418602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.418611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.425626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.425648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.432559] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.432581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.432590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.439683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.439706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.439715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.446091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.446113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.446121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.452647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.452668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.452677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.458934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.458956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.458965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.465432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.465464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.472146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.472170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.472180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.478182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.478212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.478222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.484401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.484424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.484433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.490574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.490596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.490606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.496599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.496621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.496631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.502750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.502774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.502783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.508917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.508942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.508951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.515091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.515113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.515122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.521681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.521704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.521713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.528173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.528211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.534420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.534442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.534451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.540614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.540636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.540645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.546782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.546804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.546813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.601 [2024-07-22 20:46:02.552933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.601 [2024-07-22 20:46:02.552955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.601 [2024-07-22 20:46:02.552964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.559118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.559141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.559150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.565170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.565192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.565206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.571168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.571190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.571204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.577336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.577358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.577366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.583450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.583472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.583481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.589424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.589456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.595792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.595814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.595823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.601752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 
20:46:02.601774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.601783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.607881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.607903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.607911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.614104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.614130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.614138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.602 [2024-07-22 20:46:02.620480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.602 [2024-07-22 20:46:02.620502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.602 [2024-07-22 20:46:02.620510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.626676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.626699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.626708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.632726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.632747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.632756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.638842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.638864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.638873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.644747] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.644770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.644779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.650532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.650554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.650563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.656595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.656618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.656628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.662452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.662474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.662483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.668496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.668518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.668527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.676817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.676844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.676855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.685600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.685623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.685636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.692852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.692875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.692883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.701141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.701171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.709191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.709220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.709229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.717045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.717067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.717076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.724502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.724524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.724533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.731669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.731695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.731704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.738406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.738428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 
20:46:02.738437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.746647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.746670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.754521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.754544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.754553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.761812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.761835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.761844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.770939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.770962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.770971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.779427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.779450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.864 [2024-07-22 20:46:02.779459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.864 [2024-07-22 20:46:02.788172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.864 [2024-07-22 20:46:02.788195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.788209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.796888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.796910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.796918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.805212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.805235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.805244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.813497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.813520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.813529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.822012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.822034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.822043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.832166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.832188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.832197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.843317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.843340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.843348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.853714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.853737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.853746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.864559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 
[2024-07-22 20:46:02.864583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.864593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:50.865 [2024-07-22 20:46:02.875532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:50.865 [2024-07-22 20:46:02.875556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:50.865 [2024-07-22 20:46:02.875565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.126 [2024-07-22 20:46:02.886282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.126 [2024-07-22 20:46:02.886308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.126 [2024-07-22 20:46:02.886317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.126 [2024-07-22 20:46:02.895932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.126 [2024-07-22 20:46:02.895955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.126 [2024-07-22 20:46:02.895964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.126 [2024-07-22 20:46:02.906587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.126 [2024-07-22 20:46:02.906610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.126 [2024-07-22 20:46:02.906619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.126 [2024-07-22 20:46:02.916210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.916233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.916241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.926830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.926852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.926861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.938524] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.938549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.938561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.946658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.946683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.946693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.955971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.955995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.956004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.964370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.964393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.964402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.975206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.975229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.975238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.985550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.985572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.985581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:02.995100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:02.995123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:02.995132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.003019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.003041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.003050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.010629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.010651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.010660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.018023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.018045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.018054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.024711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.024732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.024741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.032852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.032877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.032893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.041510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.041538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.041548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.050172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.050195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.050210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.058763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.058786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.058794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.068690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.068713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.068722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.078369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.078392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.078401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.088653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.088675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.088684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.098856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.098880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.098889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.109545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.109567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.109576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.120531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.120555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.120564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.129401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.129425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.129434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.127 [2024-07-22 20:46:03.138884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.127 [2024-07-22 20:46:03.138906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.127 [2024-07-22 20:46:03.138915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.389 [2024-07-22 20:46:03.149455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.389 [2024-07-22 20:46:03.149478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.389 [2024-07-22 20:46:03.149487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.389 [2024-07-22 20:46:03.159765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.389 [2024-07-22 20:46:03.159788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.389 [2024-07-22 20:46:03.159797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.389 [2024-07-22 20:46:03.171020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.389 [2024-07-22 20:46:03.171043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.171052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.182028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.182051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.182060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.192324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 
20:46:03.192346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.192356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.204460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.204484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.204493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.214288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.214316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.214325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.224718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.224743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.224753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.234382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.234406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.234415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.245553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.245578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.245586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.257155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.257180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.257188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.267369] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.267394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.267403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.276321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.276347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.276357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.285028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.285054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.285063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.295278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.295302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.304774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.304797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.304806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.316326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.316351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.316360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.327267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.327292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.327301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.338181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.338210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.338220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.347105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.347133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.347142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.357439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.357464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.357473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.367149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.367174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.367183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.376780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.376805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.376815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.385922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.385952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.385961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.393718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.393743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.393753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.390 [2024-07-22 20:46:03.403420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.390 [2024-07-22 20:46:03.403444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.390 [2024-07-22 20:46:03.403454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.411749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.411774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.411783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.420700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.420726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.420735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.431469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.431494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.431503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.440535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.440560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.440569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.449968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.449993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.450002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.459800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.459846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.459854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.468585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.468610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.468619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.477894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.477919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.477929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.486979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.487007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.487017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.495541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.495567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.495576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.504353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.504379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.504389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.514474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.514499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.514508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.525227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.525251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.525260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.535641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.654 [2024-07-22 20:46:03.535675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.654 [2024-07-22 20:46:03.546346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.654 [2024-07-22 20:46:03.546372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.546386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.556948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.556973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.556983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.567282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.567307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.567316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.577317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.577340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.577350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.586632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.586656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.586666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.597324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.597348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.597358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.607912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.607937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.607946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.618599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.618624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.618633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.629612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.629637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.629646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.639038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.639063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.639073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.648538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.648563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.658268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.658293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.658302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.655 [2024-07-22 20:46:03.667050] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.655 [2024-07-22 20:46:03.667075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.655 [2024-07-22 20:46:03.667083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.676051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.676076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.676085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.684098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.684123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.684132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.691916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.691941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.691950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.699370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.699394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.699403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.706734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.706759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.706771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.713737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.713762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.713771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.720644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.720668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.720677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.727365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.727389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.727399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.733959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.733984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.733992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:51.917 [2024-07-22 20:46:03.740625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000388e00) 00:38:51.917 [2024-07-22 20:46:03.740650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:51.917 [2024-07-22 20:46:03.740659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:51.917 00:38:51.917 Latency(us) 00:38:51.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:51.917 nvme0n1 : 2.00 3849.24 481.15 0.00 0.00 4152.90 1071.79 12451.84 00:38:51.917 =================================================================================================================== 00:38:51.917 Total : 3849.24 481.15 0.00 0.00 4152.90 1071.79 12451.84 00:38:51.917 0 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:51.917 | .driver_specific 00:38:51.917 | .nvme_error 00:38:51.917 | .status_code 00:38:51.917 | .command_transient_transport_error' 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 248 > 0 )) 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # 
killprocess 3887895 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3887895 ']' 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3887895 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:51.917 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3887895 00:38:52.178 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:52.178 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:52.178 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3887895' 00:38:52.178 killing process with pid 3887895 00:38:52.178 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3887895 00:38:52.178 Received shutdown signal, test time was about 2.000000 seconds 00:38:52.178 00:38:52.178 Latency(us) 00:38:52.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.178 =================================================================================================================== 00:38:52.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:52.178 20:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3887895 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3888576 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3888576 /var/tmp/bperf.sock 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3888576 ']' 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:52.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:52.750 20:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:52.751 [2024-07-22 20:46:04.560459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:52.751 [2024-07-22 20:46:04.560569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888576 ] 00:38:52.751 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.751 [2024-07-22 20:46:04.682638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.011 [2024-07-22 20:46:04.822356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:53.586 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:53.885 nvme0n1 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:53.885 20:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:54.146 Running I/O for 2 seconds... 
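Before the results below, the trace above shows how the digest failures are provoked for this randwrite pass: bdevperf is told to keep NVMe error statistics and retry indefinitely, the controller is attached with data digest enabled (--ddgst), and CRC32C error injection is switched from disable to corrupt. A hedged sketch of that sequence, using only the RPCs and arguments visible in the log; the assumptions are that the script's rpc_cmd goes to the application's default RPC socket (hence no -s flag on those calls) and that -i 256 is an injection interval for the corrupt type:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    # bdevperf side: keep per-bdev NVMe error counters and retry transport errors indefinitely.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with CRC32C error injection off, as the script does (default RPC socket assumed).
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest enabled so every data PDU carries a CRC32C digest.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt CRC32C results; the mismatched digests surface as the "Data digest error"
    # and COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries recorded below.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256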
00:38:54.146 [2024-07-22 20:46:05.950352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:38:54.146 [2024-07-22 20:46:05.952167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:05.952206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:05.962343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:38:54.146 [2024-07-22 20:46:05.963724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:05.963750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:05.975710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:05.977058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:05.977081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:05.988775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:05.990154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:05.990176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.001884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.003255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.003283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.014937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.016305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.016327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.028002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.029367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.029389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.041059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.042434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.042455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.054112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.055494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.055516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.067148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.068479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.068499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.080175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.081536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.081557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.093355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.094724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.146 [2024-07-22 20:46:06.094746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.146 [2024-07-22 20:46:06.106412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.146 [2024-07-22 20:46:06.107780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.147 [2024-07-22 20:46:06.107801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.147 [2024-07-22 20:46:06.119488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.147 [2024-07-22 20:46:06.120858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.147 [2024-07-22 20:46:06.120879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.147 [2024-07-22 20:46:06.132553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.147 [2024-07-22 20:46:06.133878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.147 [2024-07-22 20:46:06.133900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.147 [2024-07-22 20:46:06.145583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.147 [2024-07-22 20:46:06.146950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.147 [2024-07-22 20:46:06.146971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.147 [2024-07-22 20:46:06.158635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.147 [2024-07-22 20:46:06.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.147 [2024-07-22 20:46:06.160041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.171679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.173036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.173058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.184711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.186072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.186093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.197741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.199112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.199133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.210808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.212170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:54.409 [2024-07-22 20:46:06.212194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.223849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.225217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.225238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.236890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.238260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.238281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.249922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.251292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.251314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.262963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.264338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.264359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.275997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.277341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.277362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.289026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.290376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.290397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.302054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.303417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.303439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.315088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.316462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.316483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.328120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.329506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.329527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.341146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.342509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.342530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.354167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.355529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.367207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.368530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.380254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.381601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.393289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.394654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.394676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.409 [2024-07-22 20:46:06.406340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.409 [2024-07-22 20:46:06.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.409 [2024-07-22 20:46:06.407727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.410 [2024-07-22 20:46:06.419433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.410 [2024-07-22 20:46:06.420804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.410 [2024-07-22 20:46:06.420825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.432478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.433849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.433874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.445513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.446877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.458532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.459874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.459894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.471560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.472928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.472949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.484602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 
00:38:54.671 [2024-07-22 20:46:06.485965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.485987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.497636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.499000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.671 [2024-07-22 20:46:06.499022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.671 [2024-07-22 20:46:06.510665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.671 [2024-07-22 20:46:06.512032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.523732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.525100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.525121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.536763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.538134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.538156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.549829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.551206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.551228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.562873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.564209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.564230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.575924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.577307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.577328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.588977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.590322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.590343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.602041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:38:54.672 [2024-07-22 20:46:06.603403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.603424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.614197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:38:54.672 [2024-07-22 20:46:06.615549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.615570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.630416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:38:54.672 [2024-07-22 20:46:06.632661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.632681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.642209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:38:54.672 [2024-07-22 20:46:06.643923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.643944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.652686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:38:54.672 [2024-07-22 20:46:06.653689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.653711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.667601] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:38:54.672 [2024-07-22 20:46:06.669307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.669328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.678942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.672 [2024-07-22 20:46:06.679940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.672 [2024-07-22 20:46:06.679962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.672 [2024-07-22 20:46:06.691980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.692992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.693013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.705005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.706003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.706023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.718064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.719060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.719081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.731096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.732086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.732108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.744312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.745311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.745332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 
cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.757606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.758605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.758626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.770648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.771671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.783686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.784678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.933 [2024-07-22 20:46:06.784699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.933 [2024-07-22 20:46:06.796746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.933 [2024-07-22 20:46:06.797751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.797773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.809819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.810814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.810835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.822888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.823903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.823925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.835952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.836952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.836981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.848989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.849982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.850003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.862028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.862988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.863008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.875069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.876069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.876090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.888117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.889142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.901179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:38:54.934 [2024-07-22 20:46:06.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.902203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.916086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:38:54.934 [2024-07-22 20:46:06.917723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.917745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.926996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:38:54.934 [2024-07-22 20:46:06.928043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.928064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.940801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:54.934 [2024-07-22 20:46:06.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:54.934 [2024-07-22 20:46:06.941979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:54.934 [2024-07-22 20:46:06.953885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:06.955033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:06.955054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:06.966930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:06.968091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:06.968113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:06.979985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:06.981137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:06.981158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:06.993033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:06.994189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:06.994218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.006085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.007213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.007235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.019153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.020311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:55.195 [2024-07-22 20:46:07.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.032238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.033377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.033399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.045322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.046438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.046460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.058363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.059518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.059540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.071420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.072596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.084461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.085618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.085639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.097646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.098801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.098822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.110710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.111871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.111893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.123782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.124960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.136836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.137979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.138001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.149901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.151014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.151035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.162951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.164104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.164126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.176017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.177168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.177189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.189065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.190223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.195 [2024-07-22 20:46:07.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.195 [2024-07-22 20:46:07.202100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.195 [2024-07-22 20:46:07.203257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.196 [2024-07-22 20:46:07.203279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.196 [2024-07-22 20:46:07.215152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.196 [2024-07-22 20:46:07.216311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.196 [2024-07-22 20:46:07.216332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.228223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.229372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.229394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.241284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.242444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.242466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.254349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.255511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.255532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.267392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.268545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.268567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.280445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.281598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.281619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.293516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 
00:38:55.457 [2024-07-22 20:46:07.294672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.294694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.306563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.307715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.307736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.319613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.320772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.320794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.332655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.333811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.333835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.345701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.346855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.346877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.358753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.359906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.359928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.371820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.372975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.372997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.384882] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.386039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.397939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.399096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.399117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.410977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.412137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.412158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.424043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.425194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.425218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.437093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.438245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.438267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.450130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.451284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.451306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.457 [2024-07-22 20:46:07.463196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.457 [2024-07-22 20:46:07.464356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.457 [2024-07-22 20:46:07.464377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.458 
[2024-07-22 20:46:07.476229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.458 [2024-07-22 20:46:07.477362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.458 [2024-07-22 20:46:07.477383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.489258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.490408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.490429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.502304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.503423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.503445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.515352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.516505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.516527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.528423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.529574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.529595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.541441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.542625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.554489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.555651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.555675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.567534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.568696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.568718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.580592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.581752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.581773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.593637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.594788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.594809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.606682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.607838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.607859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.619728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.620883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.620904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.632767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.633927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.633949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.645803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.646951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.646972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.658844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.659996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.671901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.673064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.673091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.684939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.686098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.686119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.697969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.699121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.711009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.712133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.712154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.724054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.725210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.719 [2024-07-22 20:46:07.725231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.719 [2024-07-22 20:46:07.737102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.719 [2024-07-22 20:46:07.738218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:55.719 [2024-07-22 20:46:07.738240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.981 [2024-07-22 20:46:07.750337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.981 [2024-07-22 20:46:07.751543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.981 [2024-07-22 20:46:07.751565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.981 [2024-07-22 20:46:07.763386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.981 [2024-07-22 20:46:07.764534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.981 [2024-07-22 20:46:07.764555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.981 [2024-07-22 20:46:07.776414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.981 [2024-07-22 20:46:07.777533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.777555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.789466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.790631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.790652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.802491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.803644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.803665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.815548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.816678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.816699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.828718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.829869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:2172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.829890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.841756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.842910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.842931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.854797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.855911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.855932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.867856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.869010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.869031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.880924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.882041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.882062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.894035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.895187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.895217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.907083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.908238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:55.982 [2024-07-22 20:46:07.908260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:55.982 [2024-07-22 20:46:07.920129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:38:55.982 [2024-07-22 20:46:07.921281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:55.982 [2024-07-22 20:46:07.921302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:38:55.982 [2024-07-22 20:46:07.933175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0
00:38:55.982 [2024-07-22 20:46:07.934338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:55.982 [2024-07-22 20:46:07.934359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:38:55.982
00:38:55.982 Latency(us)
00:38:55.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:55.982 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:55.982 nvme0n1 : 2.00 19539.13 76.32 0.00 0.00 6541.60 2443.95 15728.64
00:38:55.982 ===================================================================================================================
00:38:55.982 Total : 19539.13 76.32 0.00 0.00 6541.60 2443.95 15728.64
00:38:55.982 0
00:38:55.982 20:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:55.982 20:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:55.982 20:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:55.982 | .driver_specific
00:38:55.982 | .nvme_error
00:38:55.982 | .status_code
00:38:55.982 | .command_transient_transport_error'
00:38:55.982 20:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3888576
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3888576 ']'
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3888576
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888576
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888576'
00:38:56.244 killing process with pid 3888576
00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3888576
00:38:56.244 Received shutdown signal, test time
was about 2.000000 seconds 00:38:56.244 00:38:56.244 Latency(us) 00:38:56.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.244 =================================================================================================================== 00:38:56.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:56.244 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3888576 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3889350 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3889350 /var/tmp/bperf.sock 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3889350 ']' 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:56.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:56.816 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:56.817 20:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:56.817 [2024-07-22 20:46:08.762634] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:56.817 [2024-07-22 20:46:08.762748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889350 ] 00:38:56.817 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:56.817 Zero copy mechanism will not be used. 
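The pass/fail decision for the run that just shut down is the '(( 153 > 0 ))' check visible a little further up: digest.sh reads the transient-transport-error counter out of bdevperf's per-bdev I/O statistics. A minimal sketch of that query, assuming the same RPC socket and bdev name as this run (the errcount variable is illustrative, not part of the script):

# Ask the running bdevperf instance for per-bdev I/O statistics over its RPC socket.
# The driver_specific.nvme_error counters are only kept when bdev_nvme_set_options
# was given --nvme-error-stat, as in the set-up traced below for the next run.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The run passes only if at least one injected digest error came back to the host as a
# TRANSIENT TRANSPORT ERROR (00/22) completion; in the run above the counter reached 153.
(( errcount > 0 ))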
00:38:56.817 EAL: No free 2048 kB hugepages reported on node 1
00:38:57.077 [2024-07-22 20:46:08.884462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:57.077 [2024-07-22 20:46:09.019468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:57.649 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:57.910 nvme0n1
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:57.910 20:46:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:58.170 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:58.170 Zero copy mechanism will not be used.
00:38:58.170 Running I/O for 2 seconds...
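The trace above shows the second run's setup following the same recipe as the first: enable NVMe error counters, attach the controller with TCP data digest enabled, then arm crc32c error injection before starting the queued workload. A condensed sketch of those traced steps (the flags are taken verbatim from the log; the plain-shell wrapper and the target RPC socket path are assumptions, since the test drives these calls through its bperf_rpc/rpc_cmd helpers):

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf app (bperf_rpc)
  TGT_RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"      # nvmf target (rpc_cmd); default socket assumed
  # Track per-status-code NVMe errors and retry failed I/O indefinitely inside bdev_nvme.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale injection on the target's crc32c accel operation.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  # Attach the target subsystem with data digest (--ddgst) enabled; this creates bdev nvme0n1.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 32 crc32c operations so digest checks fail and the affected WRITEs
  # complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the run output below shows.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued bdevperf job (this run: randwrite, 128 KiB I/O, queue depth 16, 2 seconds).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests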
00:38:58.170 [2024-07-22 20:46:10.019641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.019963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.019998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.032685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.033085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.033111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.044421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.044783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.044806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.055522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.055824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.066953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.067326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.067355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.077456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.077720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.077742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.088950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.089302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.089325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.100184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.100475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.100497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.112539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.112889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.112911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.125562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.125975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.170 [2024-07-22 20:46:10.125997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.170 [2024-07-22 20:46:10.139211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.170 [2024-07-22 20:46:10.139601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.171 [2024-07-22 20:46:10.139623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.171 [2024-07-22 20:46:10.151779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.171 [2024-07-22 20:46:10.152131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.171 [2024-07-22 20:46:10.152153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.171 [2024-07-22 20:46:10.164541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.171 [2024-07-22 20:46:10.164702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.171 [2024-07-22 20:46:10.164722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.171 [2024-07-22 20:46:10.177629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.171 [2024-07-22 20:46:10.177998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.171 [2024-07-22 
20:46:10.178020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.171 [2024-07-22 20:46:10.190621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.171 [2024-07-22 20:46:10.191032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.171 [2024-07-22 20:46:10.191054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.204525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.204783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.204805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.216220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.216598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.216620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.228793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.229144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.229165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.241583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.241943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.241965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.251674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.252026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.252047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.261581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.261939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.261961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.271657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.272019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.272040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.279710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.279950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.279971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.289803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.289943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.289963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.297112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.297493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.297514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.304007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.304377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.304399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.314301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.314646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.314667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.323691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.323848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.323867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.334281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.334655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.334676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.432 [2024-07-22 20:46:10.344569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.432 [2024-07-22 20:46:10.344923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.432 [2024-07-22 20:46:10.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.354746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.355109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.355134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.362547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.362888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.362909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.373835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.374192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.374218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.385299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.385675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.385697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.394110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.394492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.394514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.401967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.402453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.402475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.410308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.410672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.410693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.419650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.419888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.419916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.430135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.430457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.430478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.439861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.440193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.440221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.433 [2024-07-22 20:46:10.450329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.433 [2024-07-22 20:46:10.450687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.433 [2024-07-22 20:46:10.450709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 
20:46:10.460080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.460335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.460356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.469972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.470083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.470103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.479720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.480065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.480087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.490246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.490625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.490647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.500300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.500570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.500591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.511505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.511870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.511891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.521156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.521245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.521268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.532247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.532576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.532597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.542389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.542646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.542667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.550916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.551146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.551167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.559992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.560392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.560413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.568347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.568576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.568596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.577616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.577971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.577993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.585558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.585808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.585829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.593342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.593568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.593589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.602073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.602398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.602420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.611060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.611446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.611467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.620166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.620542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.620563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.628807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.629164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.629185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.636835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.695 [2024-07-22 20:46:10.637064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.695 [2024-07-22 20:46:10.637084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.695 [2024-07-22 20:46:10.644212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.644567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.644589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.652997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.653397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.653419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.662512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.662858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.662880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.671406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.671738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.671762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.679055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.679302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.679323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.687666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.688057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.688078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.696104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.696426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.696447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.703324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.703619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.703648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.696 [2024-07-22 20:46:10.711439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.696 [2024-07-22 20:46:10.711665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.696 [2024-07-22 20:46:10.711686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.719453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.719724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.719745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.727255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.727497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.727518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.735257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.735641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.735663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.743377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.743731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.743753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.751664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.752044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.758351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.758671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.758692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.766479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.766707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.766727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.775223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.775447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.775467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.782363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.782696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.782718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.791213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.791620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.791641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.800748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.800973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.800994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.807494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.807718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.807742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.813376] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.813599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.813620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.820660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.958 [2024-07-22 20:46:10.820881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.958 [2024-07-22 20:46:10.820901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.958 [2024-07-22 20:46:10.827972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.828195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.828222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.835172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.835405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.835426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.843051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.843280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.843301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.851759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.852123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.852145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.860083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.860475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.860497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.869935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.870287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.870308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.877639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.878034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.878056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.887246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.887512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.887533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.897307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.897720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.897742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.907786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.908187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.908214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.918790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.919176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.919198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.928530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.928883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.928904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.937167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.937546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.937567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.947818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.948246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.948267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.957378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.957616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.957640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.967929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.968364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.968386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:58.959 [2024-07-22 20:46:10.977607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:58.959 [2024-07-22 20:46:10.977856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.959 [2024-07-22 20:46:10.977877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.222 [2024-07-22 20:46:10.988596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.222 [2024-07-22 20:46:10.988921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.222 [2024-07-22 20:46:10.988943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:10.997700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:10.998038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:10.998060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.007179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.007574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.007595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.016461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.016814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.027029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.027443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.027464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.036320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.036646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.036667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.046171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.046540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.046561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.056581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.056940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.056962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.066662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.066893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.066913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.075851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.076232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.076253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.085418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.085828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.085849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.095121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.095355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.095375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.104099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.104405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.104427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.113232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.113490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.113512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.122677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.123026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.123048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.131458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.131692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.131712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.140887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.141272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.141294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.150259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.150587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.150608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.157714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.158071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.158092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.166788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.167016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.167036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.175486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.175819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.175841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.184688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.184914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.184934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.194294] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.194650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.194671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.204352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.204741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.213420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.213723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.213744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.223250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.223632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.223653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.233508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.223 [2024-07-22 20:46:11.233904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.223 [2024-07-22 20:46:11.233925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.223 [2024-07-22 20:46:11.243182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.487 [2024-07-22 20:46:11.243569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.487 [2024-07-22 20:46:11.243590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.487 [2024-07-22 20:46:11.252489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.487 [2024-07-22 20:46:11.252717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.487 [2024-07-22 20:46:11.252738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.487 [2024-07-22 20:46:11.261956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.487 [2024-07-22 20:46:11.262331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.487 [2024-07-22 20:46:11.262352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.487 [2024-07-22 20:46:11.272738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.487 [2024-07-22 20:46:11.273047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.487 [2024-07-22 20:46:11.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.487 [2024-07-22 20:46:11.281383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.281867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.281895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.290583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.290809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.290830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.300944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.301320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.301342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.310462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.310835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.310856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.319924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.320294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.320315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.328980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.329383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.329405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.337643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.338033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.338055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.346947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.347216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.347237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.354853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.355181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.355207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.362604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.362916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.362941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.371478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.371915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.371936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.381616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.381917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.381939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.390197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.390353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.390373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.397037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.397428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.397450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.403650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.404026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.404048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.413039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.413321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.413342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.421313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.421543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.421564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.429774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.430149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.430171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.437952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.438278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.438300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.446313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.446568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.454391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.454656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.454676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.463775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.463997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.464018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.474224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.474611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.474632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.482652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.482878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.482899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.492146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.492488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.492509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.488 [2024-07-22 20:46:11.502235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:38:59.488 [2024-07-22 20:46:11.502544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.488 [2024-07-22 20:46:11.502566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.750 [2024-07-22 20:46:11.512554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.750 [2024-07-22 20:46:11.512904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.750 [2024-07-22 20:46:11.512929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.750 [2024-07-22 20:46:11.523313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.750 [2024-07-22 20:46:11.523699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.750 [2024-07-22 20:46:11.523720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.750 [2024-07-22 20:46:11.533889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.750 [2024-07-22 20:46:11.534282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.750 [2024-07-22 20:46:11.534303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.750 [2024-07-22 20:46:11.543948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.750 [2024-07-22 20:46:11.544176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.750 [2024-07-22 20:46:11.544206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.750 [2024-07-22 20:46:11.553938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.750 [2024-07-22 20:46:11.554185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.750 [2024-07-22 20:46:11.554211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.565042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.565385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.565407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.576033] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.576499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.576520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.587171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.587525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.587547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.598746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.598981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.599001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.608862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.609278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.609299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.619003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.619363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.619385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.628746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.629055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.629076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.639580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.639980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.640002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.649886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.650270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.659333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.659761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.659782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.669329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.669726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.669747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.679181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.679580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.679601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.689561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.689954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.689979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.699352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.699619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.699639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.707775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.707867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.707888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.715354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.715467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.715487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.724193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.724326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.724347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.734901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.735049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.735069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.745565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.745744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.745765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.754722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.754819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.754839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:59.751 [2024-07-22 20:46:11.764922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:38:59.751 [2024-07-22 20:46:11.765048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:59.751 [2024-07-22 20:46:11.765068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.775164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.775349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.784828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.784920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.794975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.795163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.795184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.804154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.804273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.804293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.813756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.813849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.813869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.823886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.824086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.824106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.833540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.833699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.833719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.843001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.843099] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.843119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.853384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.853642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.853668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.863269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.863376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.863395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.873160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.873302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.873322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.882353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.882485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.882505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.892074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.892233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.892260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.901834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.901987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.902008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.912464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 
[2024-07-22 20:46:11.912556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.912576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.014 [2024-07-22 20:46:11.923073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.014 [2024-07-22 20:46:11.923209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.014 [2024-07-22 20:46:11.923230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.932527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.932659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.932679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.941804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.942054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.942074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.951259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.951406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.960444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.960542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.960561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.968661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.968814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.976966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.977075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.984989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.985154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.985174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:11.993380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:11.993489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:11.993508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:00.015 [2024-07-22 20:46:12.002809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:39:00.015 [2024-07-22 20:46:12.002893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:00.015 [2024-07-22 20:46:12.002913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:00.015 00:39:00.015 Latency(us) 00:39:00.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.015 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:00.015 nvme0n1 : 2.00 3244.04 405.50 0.00 0.00 4924.48 2498.56 14854.83 00:39:00.015 =================================================================================================================== 00:39:00.015 Total : 3244.04 405.50 0.00 0.00 4924.48 2498.56 14854.83 00:39:00.015 0 00:39:00.015 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:00.277 | .driver_specific 00:39:00.277 | .nvme_error 00:39:00.277 | .status_code 00:39:00.277 | .command_transient_transport_error' 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3889350 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # 
'[' -z 3889350 ']' 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3889350 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3889350 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3889350' 00:39:00.277 killing process with pid 3889350 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3889350 00:39:00.277 Received shutdown signal, test time was about 2.000000 seconds 00:39:00.277 00:39:00.277 Latency(us) 00:39:00.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.277 =================================================================================================================== 00:39:00.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:00.277 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3889350 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3886843 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3886843 ']' 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3886843 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3886843 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3886843' 00:39:00.849 killing process with pid 3886843 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3886843 00:39:00.849 20:46:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3886843 00:39:01.792 00:39:01.792 real 0m18.699s 00:39:01.792 user 0m35.499s 00:39:01.792 sys 0m3.549s 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:01.792 ************************************ 00:39:01.792 END TEST nvmf_digest_error 00:39:01.792 
************************************ 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:01.792 rmmod nvme_tcp 00:39:01.792 rmmod nvme_fabrics 00:39:01.792 rmmod nvme_keyring 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3886843 ']' 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3886843 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3886843 ']' 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3886843 00:39:01.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3886843) - No such process 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3886843 is not found' 00:39:01.792 Process with pid 3886843 is not found 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:01.792 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.793 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.793 20:46:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:04.339 00:39:04.339 real 0m47.806s 00:39:04.339 user 1m14.420s 00:39:04.339 sys 0m12.858s 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:04.339 ************************************ 00:39:04.339 END TEST nvmf_digest 00:39:04.339 ************************************ 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:04.339 20:46:15 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.339 ************************************ 00:39:04.339 START TEST nvmf_bdevperf 00:39:04.339 ************************************ 00:39:04.339 20:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:04.340 * Looking for test storage... 00:39:04.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
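The nvmf/common.sh preamble sourced above amounts to the environment sketched below. The literal values are the ones echoed in this run; how the host ID is derived from the generated NQN is an assumption for illustration.

# Test environment established by nvmf/common.sh (values from this run).
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # uuid portion only (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn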
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:39:04.340 20:46:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.927 20:46:22 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:10.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:10.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # 
(( 0 > 0 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:10.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:10.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.927 20:46:22 
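The NIC discovery loop traced above can be reproduced in isolation roughly as follows. The two PCI addresses are the e810 ports found in this run; the loop body is a simplified stand-in for common.sh's version (which also filters on link state and transport type).

# List the net devices sysfs exposes for each supported NIC (simplified).
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdev" ] || continue   # glob did not match: no netdev bound to this PCI function
        state=$(cat "$netdev/operstate" 2>/dev/null)
        echo "Found net devices under $pci: ${netdev##*/} (state: ${state:-unknown})"
    done
done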
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:10.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:39:10.927 00:39:10.927 --- 10.0.0.2 ping statistics --- 00:39:10.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.927 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:39:10.927 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:39:10.927 00:39:10.927 --- 10.0.0.1 ping statistics --- 00:39:10.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.927 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3894294 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3894294 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3894294 ']' 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:10.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:10.928 20:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.189 [2024-07-22 20:46:22.968725] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
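Pulled out of the interleaved trace, the network setup performed above is the following sequence. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are the ones used in this run; run as root.

# Target interface moves into a private namespace; the initiator side stays in
# the default namespace. Commands match the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator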
00:39:11.189 [2024-07-22 20:46:22.968831] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.189 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.189 [2024-07-22 20:46:23.107263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:11.448 [2024-07-22 20:46:23.305796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.448 [2024-07-22 20:46:23.305836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.448 [2024-07-22 20:46:23.305849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.448 [2024-07-22 20:46:23.305860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.448 [2024-07-22 20:46:23.305870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.448 [2024-07-22 20:46:23.305919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.448 [2024-07-22 20:46:23.306081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.448 [2024-07-22 20:46:23.306105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.709 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:11.709 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:39:11.709 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:11.709 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:11.709 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 [2024-07-22 20:46:23.753792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 Malloc0 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 20:46:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:11.969 [2024-07-22 20:46:23.868258] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:11.969 { 00:39:11.969 "params": { 00:39:11.969 "name": "Nvme$subsystem", 00:39:11.969 "trtype": "$TEST_TRANSPORT", 00:39:11.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.969 "adrfam": "ipv4", 00:39:11.969 "trsvcid": "$NVMF_PORT", 00:39:11.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.969 "hdgst": ${hdgst:-false}, 00:39:11.969 "ddgst": ${ddgst:-false} 00:39:11.969 }, 00:39:11.969 "method": "bdev_nvme_attach_controller" 00:39:11.969 } 00:39:11.969 EOF 00:39:11.969 )") 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:11.969 20:46:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:11.969 "params": { 00:39:11.969 "name": "Nvme1", 00:39:11.969 "trtype": "tcp", 00:39:11.969 "traddr": "10.0.0.2", 00:39:11.969 "adrfam": "ipv4", 00:39:11.969 "trsvcid": "4420", 00:39:11.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.969 "hdgst": false, 00:39:11.969 "ddgst": false 00:39:11.969 }, 00:39:11.969 "method": "bdev_nvme_attach_controller" 00:39:11.969 }' 00:39:11.969 [2024-07-22 20:46:23.950421] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
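Outside the harness, the target bring-up and RPC configuration traced above correspond roughly to the commands below. The direct ./scripts/rpc.py invocation, the relative paths, and relying on the default /var/tmp/spdk.sock RPC socket are assumptions; the trace goes through the harness's nvmfappstart/rpc_cmd wrappers instead, but the RPC names and arguments are the ones it issued.

# Start nvmf_tgt inside the target namespace (shm id 0, core mask 0xE), wait
# for its RPC socket, then configure transport, a 64 MiB malloc bdev, and one
# subsystem listening on 10.0.0.2:4420.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420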
00:39:11.969 [2024-07-22 20:46:23.950523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894631 ] 00:39:12.229 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.229 [2024-07-22 20:46:24.058468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.229 [2024-07-22 20:46:24.234989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.800 Running I/O for 1 seconds... 00:39:13.741 00:39:13.741 Latency(us) 00:39:13.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:13.741 Verification LBA range: start 0x0 length 0x4000 00:39:13.741 Nvme1n1 : 1.05 7803.75 30.48 0.00 0.00 15690.71 3345.07 44127.57 00:39:13.741 =================================================================================================================== 00:39:13.741 Total : 7803.75 30.48 0.00 0.00 15690.71 3345.07 44127.57 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3894988 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:14.684 { 00:39:14.684 "params": { 00:39:14.684 "name": "Nvme$subsystem", 00:39:14.684 "trtype": "$TEST_TRANSPORT", 00:39:14.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.684 "adrfam": "ipv4", 00:39:14.684 "trsvcid": "$NVMF_PORT", 00:39:14.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.684 "hdgst": ${hdgst:-false}, 00:39:14.684 "ddgst": ${ddgst:-false} 00:39:14.684 }, 00:39:14.684 "method": "bdev_nvme_attach_controller" 00:39:14.684 } 00:39:14.684 EOF 00:39:14.684 )") 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:14.684 20:46:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:14.684 "params": { 00:39:14.684 "name": "Nvme1", 00:39:14.684 "trtype": "tcp", 00:39:14.684 "traddr": "10.0.0.2", 00:39:14.684 "adrfam": "ipv4", 00:39:14.684 "trsvcid": "4420", 00:39:14.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:14.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:14.684 "hdgst": false, 00:39:14.684 "ddgst": false 00:39:14.684 }, 00:39:14.684 "method": "bdev_nvme_attach_controller" 00:39:14.684 }' 00:39:14.684 [2024-07-22 20:46:26.535727] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
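For context, gen_nvmf_target_json emits only the inner attach-controller fragment shown in the trace above; a standalone run of the same bdevperf workload would look roughly like the sketch below. The outer subsystems/bdev wrapper is the standard SPDK JSON config shape and is an assumption here, the file name is illustrative, and the harness streams the JSON through /dev/fd/62 rather than a file.

# Write the generated config to a file and run the same verify workload
# bdevperf ran above (-q 128, 4 KiB I/O, 1 second).
cat > /tmp/bdevperf_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 1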
00:39:14.684 [2024-07-22 20:46:26.535839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894988 ] 00:39:14.684 EAL: No free 2048 kB hugepages reported on node 1 00:39:14.684 [2024-07-22 20:46:26.644083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.945 [2024-07-22 20:46:26.823025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.518 Running I/O for 15 seconds... 00:39:18.067 20:46:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3894294 00:39:18.067 20:46:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:18.067 [2024-07-22 20:46:29.474250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.068 [2024-07-22 20:46:29.474505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.068 [2024-07-22 20:46:29.474518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
[The trace continues with the same two-line pattern repeated for every remaining outstanding I/O on qid:1: an nvme_qpair.c: 243:nvme_io_qpair_print_command entry (READ, nsid:1, len:8, LBAs 51768 through 52400, plus one WRITE at LBA 52720) followed by an nvme_qpair.c: 474:spdk_nvme_print_completion entry reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, as the queue pair is torn down after the kill -9 of target pid 3894294 issued above. The capture is truncated mid-entry at timestamp 2024-07-22 20:46:29.476399.]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.476980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.476990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.477003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.477013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.070 [2024-07-22 20:46:29.477025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.070 [2024-07-22 20:46:29.477035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 
[2024-07-22 20:46:29.477093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.071 [2024-07-22 20:46:29.477267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.071 [2024-07-22 20:46:29.477278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000389080 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.477297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:18.071 [2024-07-22 20:46:29.477306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:18.071 [2024-07-22 20:46:29.477318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52712 len:8 PRP1 0x0 PRP2 0x0 00:39:18.071 [2024-07-22 20:46:29.477329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
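The burst above is the bdev_nvme disconnect path draining qpair 1: each READ still queued on the qpair is printed by nvme_io_qpair_print_command and immediately completed with ABORTED - SQ DELETION (00/08), one command/completion pair per 8-block LBA from 52080 through 52712 in this excerpt, after which the qpair is freed and the controller reset below begins. The short Python sketch that follows is illustrative only: the regular expressions simply match the *NOTICE* message text visible in this log (not a stable SPDK log format), and the script name and log path used below are placeholders.

#!/usr/bin/env python3
"""Tally the READ commands aborted with SQ DELETION in an SPDK
nvme_qpair log excerpt like the one above. Best-effort sketch: the
regexes assume the exact *NOTICE* message text shown in this log."""
import re
import sys

# e.g. "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52080 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION \((\w+)/(\w+)\)"
)

def main(path: str) -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    cmds = CMD_RE.findall(text)      # printed (aborted) I/O commands
    aborts = CPL_RE.findall(text)    # matching SQ-deletion completions
    lbas = sorted(int(c[4]) for c in cmds)
    print(f"printed I/O commands : {len(cmds)}")
    print(f"SQ-deletion aborts   : {len(aborts)}")
    if lbas:
        print(f"LBA range            : {lbas[0]}..{lbas[-1]}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")

Usage (against a saved copy of this console output, file name is a placeholder): python3 count_sq_deletion_aborts.py nvmf-tcp-phy-autotest.log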
00:39:18.071 [2024-07-22 20:46:29.477535] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000389080 was disconnected and freed. reset controller. 00:39:18.071 [2024-07-22 20:46:29.481425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.481509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.482471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.482518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.482534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.482807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.483050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.483071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.483084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.486847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.071 [2024-07-22 20:46:29.495941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.496684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.496734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.496749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.497019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.497272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.497286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.497297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.501047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.071 [2024-07-22 20:46:29.510175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.510957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.511002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.511018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.511298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.511541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.511553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.511564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.515319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.071 [2024-07-22 20:46:29.524398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.525129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.525174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.525191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.525469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.525711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.525724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.525735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.529490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.071 [2024-07-22 20:46:29.538567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.539348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.539392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.539409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.539678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.539925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.539938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.539949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.543714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.071 [2024-07-22 20:46:29.552804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.553571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.071 [2024-07-22 20:46:29.553617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.071 [2024-07-22 20:46:29.553632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.071 [2024-07-22 20:46:29.553901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.071 [2024-07-22 20:46:29.554142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.071 [2024-07-22 20:46:29.554155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.071 [2024-07-22 20:46:29.554165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.071 [2024-07-22 20:46:29.557938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.071 [2024-07-22 20:46:29.567016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.071 [2024-07-22 20:46:29.567758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.567803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.567818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.568088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.568339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.568353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.568364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.572123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.072 [2024-07-22 20:46:29.581202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.581932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.581978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.581992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.582272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.582514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.582527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.582537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.586294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.072 [2024-07-22 20:46:29.595378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.596111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.596156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.596170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.596449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.596691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.596704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.596715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.600468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.072 [2024-07-22 20:46:29.609540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.610186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.610214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.610226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.610463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.610698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.610709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.610719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.614468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.072 [2024-07-22 20:46:29.623746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.624401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.624424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.624435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.624672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.624907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.624918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.624927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.628676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.072 [2024-07-22 20:46:29.637953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.638605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.638631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.638642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.638878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.639113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.639124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.639133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.642878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.072 [2024-07-22 20:46:29.652149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.652898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.652943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.652958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.653237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.653479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.653492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.653502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.657265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.072 [2024-07-22 20:46:29.666341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.072 [2024-07-22 20:46:29.667001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.072 [2024-07-22 20:46:29.667046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.072 [2024-07-22 20:46:29.667069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.072 [2024-07-22 20:46:29.667349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.072 [2024-07-22 20:46:29.667597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.072 [2024-07-22 20:46:29.667610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.072 [2024-07-22 20:46:29.667620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.072 [2024-07-22 20:46:29.671377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.073 [2024-07-22 20:46:29.680461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.681214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.681259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.681275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.681545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.681790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.681802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.681813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.685570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.073 [2024-07-22 20:46:29.694649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.695424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.695476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.695491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.695760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.696001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.696013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.696023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.699787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.073 [2024-07-22 20:46:29.708858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.709513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.709558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.709573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.709842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.710082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.710095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.710105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.713902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.073 [2024-07-22 20:46:29.722988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.723710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.723754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.723769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.724038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.724291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.724304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.724320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.728074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.073 [2024-07-22 20:46:29.737156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.737891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.737936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.737951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.738230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.738471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.738483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.738494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.742254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.073 [2024-07-22 20:46:29.751333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.752090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.752135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.752150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.752428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.752669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.752682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.752692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.756444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.073 [2024-07-22 20:46:29.765541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.766097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.766143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.766158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.766437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.766679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.766692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.766703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.770466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.073 [2024-07-22 20:46:29.779775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.780520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.780570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.780587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.780856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.781097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.781110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.781120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.784886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.073 [2024-07-22 20:46:29.793974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.794498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.794522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.794535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.794773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.795009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.795021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.795030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.798797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.073 [2024-07-22 20:46:29.808100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.073 [2024-07-22 20:46:29.808844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.073 [2024-07-22 20:46:29.808888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.073 [2024-07-22 20:46:29.808903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.073 [2024-07-22 20:46:29.809173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.073 [2024-07-22 20:46:29.809423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.073 [2024-07-22 20:46:29.809436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.073 [2024-07-22 20:46:29.809447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.073 [2024-07-22 20:46:29.813199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.074 [2024-07-22 20:46:29.822279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.823028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.823073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.823088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.823368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.823614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.823627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.823637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.827388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.074 [2024-07-22 20:46:29.836477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.837189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.837241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.837256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.837525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.837766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.837778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.837789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.841639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.074 [2024-07-22 20:46:29.850508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.851253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.851298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.851314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.851585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.851825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.851838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.851849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.855612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.074 [2024-07-22 20:46:29.864695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.865504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.865550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.865565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.865834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.866074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.866087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.866109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.869869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.074 [2024-07-22 20:46:29.878729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.879484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.879529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.879544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.879813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.880054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.880067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.880077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.883830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.074 [2024-07-22 20:46:29.892917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.893648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.893693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.893708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.893977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.894228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.894242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.894253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.898006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.074 [2024-07-22 20:46:29.907128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.907887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.907931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.907947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.908223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.908464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.908476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.908487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.912246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.074 [2024-07-22 20:46:29.921354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.922117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.922162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.922178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.922460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.922701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.922714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.922724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.926477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.074 [2024-07-22 20:46:29.935555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.936316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.936361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.936376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.936645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.936885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.936898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.936908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.940673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.074 [2024-07-22 20:46:29.949749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.074 [2024-07-22 20:46:29.950507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.074 [2024-07-22 20:46:29.950552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.074 [2024-07-22 20:46:29.950567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.074 [2024-07-22 20:46:29.950836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.074 [2024-07-22 20:46:29.951077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.074 [2024-07-22 20:46:29.951090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.074 [2024-07-22 20:46:29.951100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.074 [2024-07-22 20:46:29.954864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.075 [2024-07-22 20:46:29.963966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:29.964717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:29.964762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:29.964777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:29.965053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:29.965305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:29.965320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:29.965330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:29.969079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.075 [2024-07-22 20:46:29.978155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:29.978722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:29.978767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:29.978782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:29.979051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:29.979302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:29.979316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:29.979327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:29.983084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.075 [2024-07-22 20:46:29.992392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:29.993143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:29.993188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:29.993212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:29.993482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:29.993723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:29.993736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:29.993746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:29.997507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.075 [2024-07-22 20:46:30.007092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.007878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.007924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.007939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.008218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.008460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.008473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.008488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.012243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.075 [2024-07-22 20:46:30.021319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.022074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.022120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.022136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.022413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.022655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.022667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.022678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.026434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.075 [2024-07-22 20:46:30.035535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.036209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.036234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.036245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.036483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.036719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.036730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.036740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.040494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.075 [2024-07-22 20:46:30.049580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.050251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.050281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.050292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.050536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.050772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.050783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.050793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.054555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.075 [2024-07-22 20:46:30.063648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.064138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.064162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.064174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.064419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.064656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.064667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.064676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.068432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.075 [2024-07-22 20:46:30.077769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.075 [2024-07-22 20:46:30.078477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.075 [2024-07-22 20:46:30.078522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.075 [2024-07-22 20:46:30.078538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.075 [2024-07-22 20:46:30.078808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.075 [2024-07-22 20:46:30.079050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.075 [2024-07-22 20:46:30.079063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.075 [2024-07-22 20:46:30.079074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.075 [2024-07-22 20:46:30.082842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.377 [2024-07-22 20:46:30.091950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.092465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.092489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.092501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.092738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.092975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.092986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.092995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.096752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.377 [2024-07-22 20:46:30.106056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.106809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.106854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.106869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.107143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.107393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.107407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.107417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.111170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.377 [2024-07-22 20:46:30.120258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.121026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.121072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.121093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.121370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.121611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.121624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.121635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.125391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.377 [2024-07-22 20:46:30.134525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.135281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.135326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.135341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.135610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.135851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.135863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.135874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.139636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.377 [2024-07-22 20:46:30.148740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.149370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.149395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.149406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.149643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.149879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.149890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.149904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.153662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.377 [2024-07-22 20:46:30.162956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.163578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.163601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.163611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.163847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.164083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.164094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.164103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.167859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.377 [2024-07-22 20:46:30.177154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.177885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.177931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.177946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.178226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.178467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.178480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.178490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.182244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.377 [2024-07-22 20:46:30.191331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.192082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.192126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.192141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.377 [2024-07-22 20:46:30.192420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.377 [2024-07-22 20:46:30.192662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.377 [2024-07-22 20:46:30.192674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.377 [2024-07-22 20:46:30.192685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.377 [2024-07-22 20:46:30.196444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.377 [2024-07-22 20:46:30.205527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.377 [2024-07-22 20:46:30.206185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.377 [2024-07-22 20:46:30.206215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.377 [2024-07-22 20:46:30.206227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.206465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.206701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.206712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.206722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.210476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.219561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.220086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.220108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.220118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.220362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.220598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.220609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.220619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.224372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.233673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.234434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.234480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.234495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.234764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.235004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.235017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.235028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.238795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.247905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.248692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.248737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.248752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.249028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.249280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.249295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.249306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.253060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.261943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.262703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.262749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.262764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.263033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.263284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.263298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.263309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.267064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.276158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.276909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.276961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.276976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.277254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.277496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.277509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.277519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.281281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.290370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.290912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.290937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.290948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.291185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.291429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.291441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.291455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.295209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.304516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.305170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.305193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.305210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.305447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.305682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.305693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.305702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.309455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.318543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.319218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.319240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.319259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.319495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.319731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.319742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.319752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.323512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.332593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.333196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.333225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.333235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.333471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.333707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.333718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.333727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.337510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.346820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.347580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.347625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.347640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.347909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.348150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.348163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.348173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.351947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.361059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.361785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.361830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.361847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.362116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.362368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.362382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.362392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.366147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.378 [2024-07-22 20:46:30.375243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.375883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.375907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.375919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.376156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.376401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.376414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.376423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.378 [2024-07-22 20:46:30.380173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.378 [2024-07-22 20:46:30.389477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.378 [2024-07-22 20:46:30.390129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.378 [2024-07-22 20:46:30.390151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.378 [2024-07-22 20:46:30.390162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.378 [2024-07-22 20:46:30.390411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.378 [2024-07-22 20:46:30.390647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.378 [2024-07-22 20:46:30.390658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.378 [2024-07-22 20:46:30.390668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.653 [2024-07-22 20:46:30.394422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.653 [2024-07-22 20:46:30.403729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.653 [2024-07-22 20:46:30.404470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.653 [2024-07-22 20:46:30.404516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.653 [2024-07-22 20:46:30.404531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.653 [2024-07-22 20:46:30.404801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.653 [2024-07-22 20:46:30.405041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.653 [2024-07-22 20:46:30.405053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.653 [2024-07-22 20:46:30.405064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.653 [2024-07-22 20:46:30.408835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.653 [2024-07-22 20:46:30.417925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.653 [2024-07-22 20:46:30.418697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.653 [2024-07-22 20:46:30.418742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.653 [2024-07-22 20:46:30.418757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.653 [2024-07-22 20:46:30.419026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.653 [2024-07-22 20:46:30.419275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.653 [2024-07-22 20:46:30.419289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.653 [2024-07-22 20:46:30.419300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.653 [2024-07-22 20:46:30.423060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.653 [2024-07-22 20:46:30.432153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.653 [2024-07-22 20:46:30.432919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.653 [2024-07-22 20:46:30.432964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.653 [2024-07-22 20:46:30.432979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.653 [2024-07-22 20:46:30.433257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.653 [2024-07-22 20:46:30.433499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.653 [2024-07-22 20:46:30.433516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.653 [2024-07-22 20:46:30.433526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.653 [2024-07-22 20:46:30.437282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.653 [2024-07-22 20:46:30.446362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.653 [2024-07-22 20:46:30.447114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.653 [2024-07-22 20:46:30.447160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.653 [2024-07-22 20:46:30.447176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.447455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.447698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.447710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.447721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.451472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.654 [2024-07-22 20:46:30.460557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.461137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.461161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.461173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.461416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.461653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.461664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.461673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.465429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.654 [2024-07-22 20:46:30.474729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.475534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.475580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.475595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.475874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.476115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.476128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.476139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.479902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.654 [2024-07-22 20:46:30.488769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.489362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.489407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.489423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.489692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.489933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.489945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.489956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.493724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.654 [2024-07-22 20:46:30.502806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.503535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.503580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.503595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.503864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.504105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.504117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.504128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.507984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.654 [2024-07-22 20:46:30.516848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.517688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.517734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.517749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.518018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.518266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.518279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.518290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.522044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.654 [2024-07-22 20:46:30.530901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.531583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.531608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.531620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.531861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.532098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.532109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.532118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.535867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.654 [2024-07-22 20:46:30.544991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.545758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.545803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.545819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.546088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.546338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.546351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.546362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.550115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.654 [2024-07-22 20:46:30.559216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.559965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.560009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.560025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.560302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.560543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.560556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.560566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.564323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.654 [2024-07-22 20:46:30.573406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.574063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.574088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.574099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.654 [2024-07-22 20:46:30.574342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.654 [2024-07-22 20:46:30.574579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.654 [2024-07-22 20:46:30.574597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.654 [2024-07-22 20:46:30.574606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.654 [2024-07-22 20:46:30.578354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.654 [2024-07-22 20:46:30.587653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.654 [2024-07-22 20:46:30.588318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.654 [2024-07-22 20:46:30.588363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.654 [2024-07-22 20:46:30.588379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.588648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.588889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.588902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.588912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.592674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.655 [2024-07-22 20:46:30.601760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.602483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.602529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.602545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.602814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.603055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.603067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.603078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.606846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.655 [2024-07-22 20:46:30.615928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.616606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.616651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.616666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.616935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.617176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.617189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.617207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.620961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.655 [2024-07-22 20:46:30.630045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.630697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.630722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.630733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.630970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.631211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.631224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.631233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.634996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.655 [2024-07-22 20:46:30.644061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.644637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.644659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.644670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.644906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.645141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.645152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.645162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.648908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.655 [2024-07-22 20:46:30.658206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.658847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.658868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.658878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.655 [2024-07-22 20:46:30.659114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.655 [2024-07-22 20:46:30.659355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.655 [2024-07-22 20:46:30.659367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.655 [2024-07-22 20:46:30.659376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.655 [2024-07-22 20:46:30.663120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.655 [2024-07-22 20:46:30.672406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.655 [2024-07-22 20:46:30.673422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.655 [2024-07-22 20:46:30.673452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.655 [2024-07-22 20:46:30.673467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.917 [2024-07-22 20:46:30.673712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.917 [2024-07-22 20:46:30.673951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.917 [2024-07-22 20:46:30.673968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.917 [2024-07-22 20:46:30.673978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.917 [2024-07-22 20:46:30.677732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.917 [2024-07-22 20:46:30.686599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.917 [2024-07-22 20:46:30.687156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.687209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.687226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.687495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.687737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.687750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.687761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.691515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
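Each cycle also logs "Failed to flush tqpair=... (9): Bad file descriptor" right after the refused connect, which suggests the socket behind the queue pair has already been torn down by the time the flush runs. SPDK's internals are not reproduced here; the standalone sketch below only shows, under that assumption, how an I/O call on an already-closed socket fd reports errno 9 (EBADF) on Linux, the same errno shown in those entries.

    /* Minimal sketch: writing to a socket fd that has already been closed
     * fails with errno 9 (EBADF). Not SPDK code; it only demonstrates the
     * errno that appears in the "Failed to flush tqpair" entries above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        close(fd);                          /* tear the socket down first */

        if (write(fd, "x", 1) < 0) {
            /* Prints: write failed, errno = 9 (Bad file descriptor) */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }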
00:39:18.918 [2024-07-22 20:46:30.700806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.701516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.701561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.701576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.701846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.702087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.702099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.702110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.705868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.918 [2024-07-22 20:46:30.714950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.715716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.715761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.715777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.716045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.716295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.716313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.716323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.720078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.918 [2024-07-22 20:46:30.729165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.729927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.729972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.729987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.730262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.730503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.730516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.730526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.734284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.918 [2024-07-22 20:46:30.743374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.744130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.744175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.744191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.744469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.744710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.744723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.744734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.748705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.918 [2024-07-22 20:46:30.757612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.758423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.758468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.758484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.758753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.758994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.759006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.759017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.762777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.918 [2024-07-22 20:46:30.771646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.772310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.772356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.772373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.772642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.772882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.772895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.772905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.776670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.918 [2024-07-22 20:46:30.785750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.786497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.786542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.918 [2024-07-22 20:46:30.786558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.918 [2024-07-22 20:46:30.786827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.918 [2024-07-22 20:46:30.787067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.918 [2024-07-22 20:46:30.787080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.918 [2024-07-22 20:46:30.787091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.918 [2024-07-22 20:46:30.790853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.918 [2024-07-22 20:46:30.799933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.918 [2024-07-22 20:46:30.800696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.918 [2024-07-22 20:46:30.800741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.800757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.801026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.801276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.801289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.801300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.805055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.919 [2024-07-22 20:46:30.814134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.814786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.814810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.814825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.815063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.815306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.815318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.815328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.819078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.919 [2024-07-22 20:46:30.828375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.829023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.829045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.829056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.829297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.829533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.829544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.829554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.833297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.919 [2024-07-22 20:46:30.842582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.843204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.843226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.843237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.843473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.843708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.843719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.843728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.847472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.919 [2024-07-22 20:46:30.856751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.857417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.857462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.857478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.857747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.857988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.858004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.858015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.861785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.919 [2024-07-22 20:46:30.870885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.871570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.871614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.871630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.871900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.872141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.872153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.872164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.875933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.919 [2024-07-22 20:46:30.885018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.885654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.885700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.885715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.885984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.886234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.886247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.886258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.890008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.919 [2024-07-22 20:46:30.899081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.899755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.899779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.899791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.900028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.900269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.900280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.900290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.904029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:18.919 [2024-07-22 20:46:30.913102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.913767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.913789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.913800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.914036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.914276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.914288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.914297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.918044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:18.919 [2024-07-22 20:46:30.927333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.919 [2024-07-22 20:46:30.928029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:18.919 [2024-07-22 20:46:30.928074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:18.919 [2024-07-22 20:46:30.928088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:18.919 [2024-07-22 20:46:30.928368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:18.919 [2024-07-22 20:46:30.928609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:18.919 [2024-07-22 20:46:30.928622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:18.919 [2024-07-22 20:46:30.928632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.919 [2024-07-22 20:46:30.932384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.180 [2024-07-22 20:46:30.941456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.180 [2024-07-22 20:46:30.942195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.180 [2024-07-22 20:46:30.942246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.180 [2024-07-22 20:46:30.942263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.180 [2024-07-22 20:46:30.942533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.180 [2024-07-22 20:46:30.942774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.180 [2024-07-22 20:46:30.942786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.180 [2024-07-22 20:46:30.942797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.180 [2024-07-22 20:46:30.946559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.180 [2024-07-22 20:46:30.955644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.180 [2024-07-22 20:46:30.956452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.180 [2024-07-22 20:46:30.956497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.180 [2024-07-22 20:46:30.956517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.180 [2024-07-22 20:46:30.956787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.180 [2024-07-22 20:46:30.957027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.180 [2024-07-22 20:46:30.957040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.180 [2024-07-22 20:46:30.957050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.180 [2024-07-22 20:46:30.960848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.180 [2024-07-22 20:46:30.969707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.180 [2024-07-22 20:46:30.970508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.180 [2024-07-22 20:46:30.970553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.180 [2024-07-22 20:46:30.970568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.180 [2024-07-22 20:46:30.970837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.180 [2024-07-22 20:46:30.971078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.180 [2024-07-22 20:46:30.971090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.180 [2024-07-22 20:46:30.971101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.180 [2024-07-22 20:46:30.974859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.180 [2024-07-22 20:46:30.983935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.180 [2024-07-22 20:46:30.984617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.180 [2024-07-22 20:46:30.984642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.180 [2024-07-22 20:46:30.984654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.180 [2024-07-22 20:46:30.984892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.180 [2024-07-22 20:46:30.985128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:30.985140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:30.985149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:30.988894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:30.997954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:30.998713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:30.998758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:30.998774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:30.999045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:30.999299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:30.999313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:30.999324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.003075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.012146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.012906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.012952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.012967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.013243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.013484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.013497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.013507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.017264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.026351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.026998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.027022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.027034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.027279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.027517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.027528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.027537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.031285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.040574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.041197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.041249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.041266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.041537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.041778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.041791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.041801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.045561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.054637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.055296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.055321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.055333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.055572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.055808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.055819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.055829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.059593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.068666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.069319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.069364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.069380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.069649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.069890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.069902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.069913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.073674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.082759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.083503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.083548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.083563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.083832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.084073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.084086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.084097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.087861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.096943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.097710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.097755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.097774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.098044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.098294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.098308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.098319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.102072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.111147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.111899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.111944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.111959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.112238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.112479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.112492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.112502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.116258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.125344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.126090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.126134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.126149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.126428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.126670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.126682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.126693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.130447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.139514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.140073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.140097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.140109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.140352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.140593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.140605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.140614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.144358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.153641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.154422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.154467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.154482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.154751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.154993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.155006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.155016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.158790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.167872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.168584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.168630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.168645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.168914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.169155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.169168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.169178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.172940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.181 [2024-07-22 20:46:31.182010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.182775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.182820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.182835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.183104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.183355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.183369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.183380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.187132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.181 [2024-07-22 20:46:31.196208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.181 [2024-07-22 20:46:31.196923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.181 [2024-07-22 20:46:31.196969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.181 [2024-07-22 20:46:31.196984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.181 [2024-07-22 20:46:31.197262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.181 [2024-07-22 20:46:31.197503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.181 [2024-07-22 20:46:31.197516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.181 [2024-07-22 20:46:31.197526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.181 [2024-07-22 20:46:31.201277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.443 [2024-07-22 20:46:31.210347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.443 [2024-07-22 20:46:31.211069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.443 [2024-07-22 20:46:31.211114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.443 [2024-07-22 20:46:31.211129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.443 [2024-07-22 20:46:31.211407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.443 [2024-07-22 20:46:31.211649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.443 [2024-07-22 20:46:31.211661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.443 [2024-07-22 20:46:31.211671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.443 [2024-07-22 20:46:31.215429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.443 [2024-07-22 20:46:31.224495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.443 [2024-07-22 20:46:31.225286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.225332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.225347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.225616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.225857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.225870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.225880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.229647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.444 [2024-07-22 20:46:31.238726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.239484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.239529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.239552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.239822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.240063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.240075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.240086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.243846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.444 [2024-07-22 20:46:31.252926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.253628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.253673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.253688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.253957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.254198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.254221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.254232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.257991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.444 [2024-07-22 20:46:31.267075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.267799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.267843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.267858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.268128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.268377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.268391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.268401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.272155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.444 [2024-07-22 20:46:31.281239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.281919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.281944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.281960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.282199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.282446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.282458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.282467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.286210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.444 [2024-07-22 20:46:31.295278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.296031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.296075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.296090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.296369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.296611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.296623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.296634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.300388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.444 [2024-07-22 20:46:31.309461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.310089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.310134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.310149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.310427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.310669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.310681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.310692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.314450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.444 [2024-07-22 20:46:31.323523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.324283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.324328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.324343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.324612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.324852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.324865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.324875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.328642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.444 [2024-07-22 20:46:31.337722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.338476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.338521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.338536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.338806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.339046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.339059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.339069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.342830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.444 [2024-07-22 20:46:31.351914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.352656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.444 [2024-07-22 20:46:31.352701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.444 [2024-07-22 20:46:31.352716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.444 [2024-07-22 20:46:31.352985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.444 [2024-07-22 20:46:31.353236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.444 [2024-07-22 20:46:31.353250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.444 [2024-07-22 20:46:31.353260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.444 [2024-07-22 20:46:31.357012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.444 [2024-07-22 20:46:31.366103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.444 [2024-07-22 20:46:31.366834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.366879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.366894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.367163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.367414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.367427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.367438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.371186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.445 [2024-07-22 20:46:31.380321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.381070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.381119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.381135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.381414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.381656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.381668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.381678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.385438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.445 [2024-07-22 20:46:31.394518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.395266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.395311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.395328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.395597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.395838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.395850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.395861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.399623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.445 [2024-07-22 20:46:31.408693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.409466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.409511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.409526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.409795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.410036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.410049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.410059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.413821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.445 [2024-07-22 20:46:31.422908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.423631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.423676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.423691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.423960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.424214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.424228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.424239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.427992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.445 [2024-07-22 20:46:31.437075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.437875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.437920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.437935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.438214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.438455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.438468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.438479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.442230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.445 [2024-07-22 20:46:31.451303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.445 [2024-07-22 20:46:31.451867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.445 [2024-07-22 20:46:31.451890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.445 [2024-07-22 20:46:31.451901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.445 [2024-07-22 20:46:31.452139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.445 [2024-07-22 20:46:31.452381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.445 [2024-07-22 20:46:31.452393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.445 [2024-07-22 20:46:31.452403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.445 [2024-07-22 20:46:31.456146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.708 [2024-07-22 20:46:31.465446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.466094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.466115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.466126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.708 [2024-07-22 20:46:31.466369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.708 [2024-07-22 20:46:31.466606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.708 [2024-07-22 20:46:31.466617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.708 [2024-07-22 20:46:31.466626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.708 [2024-07-22 20:46:31.470376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.708 [2024-07-22 20:46:31.479660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.480111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.480135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.480146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.708 [2024-07-22 20:46:31.480397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.708 [2024-07-22 20:46:31.480642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.708 [2024-07-22 20:46:31.480654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.708 [2024-07-22 20:46:31.480663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.708 [2024-07-22 20:46:31.484412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.708 [2024-07-22 20:46:31.493697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.494426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.494470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.494485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.708 [2024-07-22 20:46:31.494754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.708 [2024-07-22 20:46:31.494996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.708 [2024-07-22 20:46:31.495008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.708 [2024-07-22 20:46:31.495018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.708 [2024-07-22 20:46:31.498777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.708 [2024-07-22 20:46:31.507853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.508618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.508663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.508679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.708 [2024-07-22 20:46:31.508948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.708 [2024-07-22 20:46:31.509189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.708 [2024-07-22 20:46:31.509210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.708 [2024-07-22 20:46:31.509221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.708 [2024-07-22 20:46:31.512971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.708 [2024-07-22 20:46:31.522050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.522798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.522847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.522862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.708 [2024-07-22 20:46:31.523132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.708 [2024-07-22 20:46:31.523382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.708 [2024-07-22 20:46:31.523396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.708 [2024-07-22 20:46:31.523406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.708 [2024-07-22 20:46:31.527162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.708 [2024-07-22 20:46:31.536236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.708 [2024-07-22 20:46:31.536989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.708 [2024-07-22 20:46:31.537034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.708 [2024-07-22 20:46:31.537049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.537328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.537569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.537582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.537592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.541431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.709 [2024-07-22 20:46:31.550284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.551020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.551064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.551079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.551357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.551599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.551611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.551622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.555373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.709 [2024-07-22 20:46:31.564455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.565120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.565144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.565155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.565400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.565641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.565653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.565662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.569411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.709 [2024-07-22 20:46:31.578483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.579179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.579230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.579246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.579515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.579755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.579768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.579779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.583534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.709 [2024-07-22 20:46:31.592634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.593294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.593318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.593330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.593568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.593805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.593816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.593825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.597577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.709 [2024-07-22 20:46:31.606873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.607495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.607540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.607555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.607824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.608064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.608077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.608092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.611852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.709 [2024-07-22 20:46:31.620929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.621634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.621679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.621694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.621964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.622214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.622228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.622239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.625988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.709 [2024-07-22 20:46:31.635058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.635684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.635728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.635743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.636012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.636263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.636276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.636287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.640034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.709 [2024-07-22 20:46:31.649143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.649833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.649857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.649869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.650106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.650347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.650360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.650369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.654111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.709 [2024-07-22 20:46:31.663188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.663938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.663987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.664003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.664282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.709 [2024-07-22 20:46:31.664524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.709 [2024-07-22 20:46:31.664536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.709 [2024-07-22 20:46:31.664547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.709 [2024-07-22 20:46:31.668301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.709 [2024-07-22 20:46:31.677383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.709 [2024-07-22 20:46:31.678107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.709 [2024-07-22 20:46:31.678152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.709 [2024-07-22 20:46:31.678167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.709 [2024-07-22 20:46:31.678445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.710 [2024-07-22 20:46:31.678687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.710 [2024-07-22 20:46:31.678700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.710 [2024-07-22 20:46:31.678719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.710 [2024-07-22 20:46:31.682476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.710 [2024-07-22 20:46:31.691548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.710 [2024-07-22 20:46:31.692284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.710 [2024-07-22 20:46:31.692329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.710 [2024-07-22 20:46:31.692346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.710 [2024-07-22 20:46:31.692617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.710 [2024-07-22 20:46:31.692858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.710 [2024-07-22 20:46:31.692871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.710 [2024-07-22 20:46:31.692881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.710 [2024-07-22 20:46:31.696641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.710 [2024-07-22 20:46:31.705715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.710 [2024-07-22 20:46:31.706521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.710 [2024-07-22 20:46:31.706566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.710 [2024-07-22 20:46:31.706581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.710 [2024-07-22 20:46:31.706854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.710 [2024-07-22 20:46:31.707096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.710 [2024-07-22 20:46:31.707109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.710 [2024-07-22 20:46:31.707119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.710 [2024-07-22 20:46:31.710878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.710 [2024-07-22 20:46:31.719949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.710 [2024-07-22 20:46:31.720707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.710 [2024-07-22 20:46:31.720752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.710 [2024-07-22 20:46:31.720767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.710 [2024-07-22 20:46:31.721036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.710 [2024-07-22 20:46:31.721287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.710 [2024-07-22 20:46:31.721301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.710 [2024-07-22 20:46:31.721311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.710 [2024-07-22 20:46:31.725075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.972 [2024-07-22 20:46:31.734157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.972 [2024-07-22 20:46:31.734914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.972 [2024-07-22 20:46:31.734959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.972 [2024-07-22 20:46:31.734974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.972 [2024-07-22 20:46:31.735253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.972 [2024-07-22 20:46:31.735494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.972 [2024-07-22 20:46:31.735507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.972 [2024-07-22 20:46:31.735517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.972 [2024-07-22 20:46:31.739274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.972 [2024-07-22 20:46:31.748553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.972 [2024-07-22 20:46:31.749224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.972 [2024-07-22 20:46:31.749248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.972 [2024-07-22 20:46:31.749259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.972 [2024-07-22 20:46:31.749498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.972 [2024-07-22 20:46:31.749734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.972 [2024-07-22 20:46:31.749745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.972 [2024-07-22 20:46:31.749759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.972 [2024-07-22 20:46:31.753509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.972 [2024-07-22 20:46:31.762577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.972 [2024-07-22 20:46:31.763274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.972 [2024-07-22 20:46:31.763319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.972 [2024-07-22 20:46:31.763336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.972 [2024-07-22 20:46:31.763608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.972 [2024-07-22 20:46:31.763848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.972 [2024-07-22 20:46:31.763861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.972 [2024-07-22 20:46:31.763871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.972 [2024-07-22 20:46:31.767631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.972 [2024-07-22 20:46:31.776714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.972 [2024-07-22 20:46:31.777492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.972 [2024-07-22 20:46:31.777537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.972 [2024-07-22 20:46:31.777553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.972 [2024-07-22 20:46:31.777822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.972 [2024-07-22 20:46:31.778063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.972 [2024-07-22 20:46:31.778076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.972 [2024-07-22 20:46:31.778086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.972 [2024-07-22 20:46:31.781843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.972 [2024-07-22 20:46:31.790927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.972 [2024-07-22 20:46:31.791618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.972 [2024-07-22 20:46:31.791642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.972 [2024-07-22 20:46:31.791654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.972 [2024-07-22 20:46:31.791891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.972 [2024-07-22 20:46:31.792126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.792138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.792147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.795920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.973 [2024-07-22 20:46:31.804998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.805753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.805797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.805812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.806082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.806332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.806346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.806357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.810106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.973 [2024-07-22 20:46:31.819189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.819957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.820002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.820017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.820297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.820538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.820551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.820561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.824399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.973 [2024-07-22 20:46:31.833271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.834021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.834067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.834081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.834360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.834602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.834614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.834625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.838384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.973 [2024-07-22 20:46:31.847472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.848181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.848234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.848250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.848525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.848767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.848780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.848790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.852557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.973 [2024-07-22 20:46:31.861656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.862343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.862368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.862380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.862618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.862854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.862865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.862875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.866636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.973 [2024-07-22 20:46:31.875720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.876450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.876496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.876512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.876781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.877021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.877034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.877044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.880818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.973 [2024-07-22 20:46:31.889952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.890591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.890615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.890627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.890864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.891100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.891111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.891124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.894881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.973 [2024-07-22 20:46:31.904188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.904889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.904934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.904950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.905229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.905471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.905484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.905494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.909256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.973 [2024-07-22 20:46:31.918346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.919001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.973 [2024-07-22 20:46:31.919025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.973 [2024-07-22 20:46:31.919036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.973 [2024-07-22 20:46:31.919281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.973 [2024-07-22 20:46:31.919518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.973 [2024-07-22 20:46:31.919529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.973 [2024-07-22 20:46:31.919539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.973 [2024-07-22 20:46:31.923295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.973 [2024-07-22 20:46:31.932392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.973 [2024-07-22 20:46:31.933013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.974 [2024-07-22 20:46:31.933035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.974 [2024-07-22 20:46:31.933046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.974 [2024-07-22 20:46:31.933288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.974 [2024-07-22 20:46:31.933524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.974 [2024-07-22 20:46:31.933536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.974 [2024-07-22 20:46:31.933545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.974 [2024-07-22 20:46:31.937307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.974 [2024-07-22 20:46:31.946606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.974 [2024-07-22 20:46:31.947258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.974 [2024-07-22 20:46:31.947280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.974 [2024-07-22 20:46:31.947291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.974 [2024-07-22 20:46:31.947527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.974 [2024-07-22 20:46:31.947763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.974 [2024-07-22 20:46:31.947773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.974 [2024-07-22 20:46:31.947783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.974 [2024-07-22 20:46:31.951550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.974 [2024-07-22 20:46:31.960644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.974 [2024-07-22 20:46:31.961315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.974 [2024-07-22 20:46:31.961361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.974 [2024-07-22 20:46:31.961377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.974 [2024-07-22 20:46:31.961648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.974 [2024-07-22 20:46:31.961889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.974 [2024-07-22 20:46:31.961902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.974 [2024-07-22 20:46:31.961913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.974 [2024-07-22 20:46:31.965683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:19.974 [2024-07-22 20:46:31.974771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.974 [2024-07-22 20:46:31.975491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.974 [2024-07-22 20:46:31.975537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.974 [2024-07-22 20:46:31.975552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.974 [2024-07-22 20:46:31.975821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.974 [2024-07-22 20:46:31.976062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.974 [2024-07-22 20:46:31.976074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.974 [2024-07-22 20:46:31.976085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:19.974 [2024-07-22 20:46:31.979852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:19.974 [2024-07-22 20:46:31.988949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:19.974 [2024-07-22 20:46:31.989668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:19.974 [2024-07-22 20:46:31.989713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:19.974 [2024-07-22 20:46:31.989728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:19.974 [2024-07-22 20:46:31.990002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:19.974 [2024-07-22 20:46:31.990250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:19.974 [2024-07-22 20:46:31.990264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:19.974 [2024-07-22 20:46:31.990275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:31.994031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.237 [2024-07-22 20:46:32.003145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.003910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.003955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.003970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.004249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.004490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.004502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.004513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.008267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.237 [2024-07-22 20:46:32.017357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.017907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.017931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.017942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.018180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.018423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.018435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.018444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.022199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.237 [2024-07-22 20:46:32.031498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.032024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.032046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.032057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.032300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.032537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.032548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.032562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.036310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.237 [2024-07-22 20:46:32.045608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.046313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.046359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.046375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.046644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.046885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.046897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.046908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.050667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.237 [2024-07-22 20:46:32.059760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.060387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.060411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.060422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.060660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.060896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.060908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.060918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.064664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.237 [2024-07-22 20:46:32.073955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.074691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.074736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.074752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.075021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.075269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.075283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.075293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.079044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.237 [2024-07-22 20:46:32.088128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.088859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.088912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.088927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.089195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.089446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.089458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.089469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.093226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.237 [2024-07-22 20:46:32.102301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.102922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.102946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.102957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.103194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.103438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.103450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.103460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.107210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.237 [2024-07-22 20:46:32.116494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.237 [2024-07-22 20:46:32.117109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.237 [2024-07-22 20:46:32.117131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.237 [2024-07-22 20:46:32.117142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.237 [2024-07-22 20:46:32.117383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.237 [2024-07-22 20:46:32.117620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.237 [2024-07-22 20:46:32.117631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.237 [2024-07-22 20:46:32.117640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.237 [2024-07-22 20:46:32.121386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.237 [2024-07-22 20:46:32.130667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.131312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.131358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.131375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.131651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.131891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.131904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.131914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.135678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.238 [2024-07-22 20:46:32.144765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.145523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.145568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.145583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.145852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.146092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.146105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.146116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.149876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.238 [2024-07-22 20:46:32.158960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.159628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.159653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.159665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.159903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.160139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.160151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.160160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.163913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.238 [2024-07-22 20:46:32.173203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.173816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.173838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.173849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.174085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.174326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.174338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.174352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.178096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.238 [2024-07-22 20:46:32.187385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.187991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.188012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.188023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.188265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.188501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.188512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.188521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.192267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.238 [2024-07-22 20:46:32.201559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.202257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.202280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.202292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.202527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.202763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.202773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.202783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.206527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.238 [2024-07-22 20:46:32.215654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.216311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.216333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.216344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.216580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.216816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.216827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.216836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.220582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.238 [2024-07-22 20:46:32.229866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.230581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.230626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.230641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.230911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.231152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.231165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.231175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.234933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.238 [2024-07-22 20:46:32.244008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.238 [2024-07-22 20:46:32.244678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.238 [2024-07-22 20:46:32.244703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.238 [2024-07-22 20:46:32.244714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.238 [2024-07-22 20:46:32.244951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.238 [2024-07-22 20:46:32.245187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.238 [2024-07-22 20:46:32.245198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.238 [2024-07-22 20:46:32.245213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.238 [2024-07-22 20:46:32.248955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.500 [2024-07-22 20:46:32.258032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.500 [2024-07-22 20:46:32.258775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.500 [2024-07-22 20:46:32.258820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.500 [2024-07-22 20:46:32.258836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.500 [2024-07-22 20:46:32.259105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.500 [2024-07-22 20:46:32.259363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.500 [2024-07-22 20:46:32.259377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.259388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.263141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.501 [2024-07-22 20:46:32.272220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.272961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.273006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.273021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.273302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.273544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.273556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.273567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.277323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.501 [2024-07-22 20:46:32.286412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.287067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.287091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.287102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.287354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.287591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.287602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.287611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.291359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.501 [2024-07-22 20:46:32.300442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.300999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.301021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.301032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.301275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.301511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.301522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.301532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.305282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.501 [2024-07-22 20:46:32.314577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.315189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.315216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.315227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.315464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.315699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.315714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.315723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.319477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.501 [2024-07-22 20:46:32.328774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.329423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.329446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.329456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.329692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.329928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.329938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.329948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.333700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.501 [2024-07-22 20:46:32.343000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.343731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.343776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.343791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.344061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.344310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.344324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.344335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.348094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.501 [2024-07-22 20:46:32.357181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.357835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.357859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.357871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.358108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.358350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.358362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.358371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.362135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.501 [2024-07-22 20:46:32.371226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.371748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.371792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.371807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.372076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.372324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.372337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.372348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.376110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.501 [2024-07-22 20:46:32.385432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.386088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.386111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.386123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.386367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.386604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.386615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.386624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.501 [2024-07-22 20:46:32.390373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.501 [2024-07-22 20:46:32.399684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.501 [2024-07-22 20:46:32.400437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.501 [2024-07-22 20:46:32.400481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.501 [2024-07-22 20:46:32.400496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.501 [2024-07-22 20:46:32.400766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.501 [2024-07-22 20:46:32.401007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.501 [2024-07-22 20:46:32.401019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.501 [2024-07-22 20:46:32.401029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.404798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.502 [2024-07-22 20:46:32.413893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.414530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.414555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.414572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.414810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.415046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.415057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.415067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.418861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.502 [2024-07-22 20:46:32.427954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.428578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.428601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.428612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.428849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.429085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.429096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.429105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.432862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.502 [2024-07-22 20:46:32.442158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.442703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.442724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.442735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.442971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.443212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.443223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.443233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.446978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
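The block above repeats the same bdev_nvme reset cycle: the host disconnects the controller, tries to re-open the TCP qpair to 10.0.0.2 port 4420, and connect() fails with errno 111 (ECONNREFUSED) because the target process is no longer listening, so every reset ends in "controller reinitialization failed" / "Resetting controller failed." As a hedged illustration only (not SPDK's nvme_tcp code), a minimal C sketch of how a plain connect() to a reachable address with no listener surfaces that same errno:

/* Minimal sketch: attempt a TCP connect to 10.0.0.2:4420 and report
 * ECONNREFUSED (errno 111 on Linux) when nothing is listening there.
 * Hypothetical illustration; this is not SPDK's posix.c/nvme_tcp path. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the target killed this prints errno 111 (ECONNREFUSED),
         * matching the "connect() failed, errno = 111" lines above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The test keeps issuing resets while the target is down, which is why the identical error sequence recurs every ~14 ms until the target is restarted further below.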
00:39:20.502 [2024-07-22 20:46:32.456276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.456886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.456907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.456918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.457154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.457394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.457410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.457419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.461183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3894294 Killed "${NVMF_APP[@]}" "$@" 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:20.502 [2024-07-22 20:46:32.470307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.470841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.470862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.470873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.471109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3896253 00:39:20.502 [2024-07-22 20:46:32.471350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.471362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.471371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3896253 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3896253 ']' 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:20.502 20:46:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:20.502 [2024-07-22 20:46:32.475119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.502 [2024-07-22 20:46:32.484434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.485035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.485058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.485069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.485311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.485551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.485570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.485580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.489338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.502 [2024-07-22 20:46:32.498642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.499132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.499155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.499165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.499408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.499645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.499656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.499665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.503419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.502 [2024-07-22 20:46:32.512732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.502 [2024-07-22 20:46:32.513360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.502 [2024-07-22 20:46:32.513383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.502 [2024-07-22 20:46:32.513394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.502 [2024-07-22 20:46:32.513631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.502 [2024-07-22 20:46:32.513867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.502 [2024-07-22 20:46:32.513879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.502 [2024-07-22 20:46:32.513888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.502 [2024-07-22 20:46:32.517646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
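The shell trace interleaved above shows the recovery half of the test: the old NVMF_APP was killed, tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0xE, and the harness then waits for the new process to listen on the UNIX domain socket /var/tmp/spdk.sock before sending RPCs. As a hedged sketch only (the harness's actual waitforlisten helper is shell and may differ), a small C routine that polls a UNIX-domain socket path until a connect succeeds or a timeout expires:

/* Minimal sketch: poll until some process accepts connections on a
 * UNIX-domain socket such as /var/tmp/spdk.sock. Hypothetical helper,
 * not the autotest waitforlisten implementation. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_unix_listener(const char *path, int timeout_sec)
{
    for (int i = 0; i < timeout_sec * 10; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        if (rc == 0) {
            return 0;            /* listener is up, RPCs can proceed */
        }
        usleep(100 * 1000);      /* retry every 100 ms */
    }
    return -1;                   /* timed out */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0) {
        printf("spdk.sock is accepting connections\n");
    } else {
        printf("timed out waiting for spdk.sock\n");
    }
    return 0;
}

Until that wait completes, the initiator-side reset attempts continue to fail, which is why the ECONNREFUSED blocks keep appearing between the restart trace lines.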
00:39:20.765 [2024-07-22 20:46:32.526975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.527617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.527639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.527650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.527888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.528125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.528136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.528145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.531915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.765 [2024-07-22 20:46:32.541020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.541742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.541790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.541806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.542082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.542337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.542352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.542363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.546125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.765 [2024-07-22 20:46:32.553023] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:39:20.765 [2024-07-22 20:46:32.553115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.765 [2024-07-22 20:46:32.555234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.555902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.555949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.555967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.556246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.556490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.556503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.556514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.560296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.765 [2024-07-22 20:46:32.569403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.570162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.570214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.570232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.570502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.570744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.570758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.570769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.574633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.765 [2024-07-22 20:46:32.583522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.584294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.584357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.584630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.584871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.584884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.584894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.588664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.765 [2024-07-22 20:46:32.597769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.598519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.598564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.598580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.598850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.599093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.599106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.765 [2024-07-22 20:46:32.599117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.765 [2024-07-22 20:46:32.602881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.765 [2024-07-22 20:46:32.611999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.765 [2024-07-22 20:46:32.612642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.765 [2024-07-22 20:46:32.612688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.765 [2024-07-22 20:46:32.612702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.765 [2024-07-22 20:46:32.612974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.765 [2024-07-22 20:46:32.613224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.765 [2024-07-22 20:46:32.613238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.613249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.617008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.766 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.766 [2024-07-22 20:46:32.626164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.626891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.626936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.626960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.627239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.627482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.627495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.627506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.631268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.766 [2024-07-22 20:46:32.640356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.641011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.641034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.641046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.641291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.641532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.641544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.641553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.645312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.766 [2024-07-22 20:46:32.654410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.655075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.655097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.655109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.655353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.655590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.655602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.655611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.659374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.766 [2024-07-22 20:46:32.668462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.669115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.669137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.669148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.669391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.669633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.669644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.669654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.673406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.766 [2024-07-22 20:46:32.682498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.683116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.683137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.683148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.683389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.683626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.683637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.683646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.687369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:20.766 [2024-07-22 20:46:32.687409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.766 [2024-07-22 20:46:32.696708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.697440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.697486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.697502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.766 [2024-07-22 20:46:32.697773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.766 [2024-07-22 20:46:32.698015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.766 [2024-07-22 20:46:32.698028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.766 [2024-07-22 20:46:32.698038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.766 [2024-07-22 20:46:32.701803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.766 [2024-07-22 20:46:32.710893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.766 [2024-07-22 20:46:32.711647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.766 [2024-07-22 20:46:32.711692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.766 [2024-07-22 20:46:32.711708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.711978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.712234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.712248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.712263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.767 [2024-07-22 20:46:32.716020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.767 [2024-07-22 20:46:32.725104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.767 [2024-07-22 20:46:32.725832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.767 [2024-07-22 20:46:32.725877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.767 [2024-07-22 20:46:32.725893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.726163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.726415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.726429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.726440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.767 [2024-07-22 20:46:32.730196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.767 [2024-07-22 20:46:32.739274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.767 [2024-07-22 20:46:32.740077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.767 [2024-07-22 20:46:32.740122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.767 [2024-07-22 20:46:32.740137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.740416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.740660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.740673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.740684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.767 [2024-07-22 20:46:32.744442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.767 [2024-07-22 20:46:32.753306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.767 [2024-07-22 20:46:32.753970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.767 [2024-07-22 20:46:32.753995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.767 [2024-07-22 20:46:32.754006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.754251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.754490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.754502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.754512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.767 [2024-07-22 20:46:32.758262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:20.767 [2024-07-22 20:46:32.767361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.767 [2024-07-22 20:46:32.768042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.767 [2024-07-22 20:46:32.768065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.767 [2024-07-22 20:46:32.768076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.768318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.768556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.768568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.768577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:20.767 [2024-07-22 20:46:32.772328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:20.767 [2024-07-22 20:46:32.781409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:20.767 [2024-07-22 20:46:32.782026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.767 [2024-07-22 20:46:32.782048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:20.767 [2024-07-22 20:46:32.782058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:20.767 [2024-07-22 20:46:32.782302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:20.767 [2024-07-22 20:46:32.782540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:20.767 [2024-07-22 20:46:32.782551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:20.767 [2024-07-22 20:46:32.782561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.029 [2024-07-22 20:46:32.786366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.029 [2024-07-22 20:46:32.795441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.029 [2024-07-22 20:46:32.796131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.029 [2024-07-22 20:46:32.796154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.029 [2024-07-22 20:46:32.796166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.029 [2024-07-22 20:46:32.796410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.029 [2024-07-22 20:46:32.796647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.029 [2024-07-22 20:46:32.796659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.029 [2024-07-22 20:46:32.796668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.029 [2024-07-22 20:46:32.800419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.029 [2024-07-22 20:46:32.809489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.029 [2024-07-22 20:46:32.810150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.029 [2024-07-22 20:46:32.810171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.029 [2024-07-22 20:46:32.810182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.029 [2024-07-22 20:46:32.810429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.029 [2024-07-22 20:46:32.810666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.810677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.810686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.814436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.823519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.824156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.824178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.824188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.824429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.824666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.824677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.824686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.826403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:21.030 [2024-07-22 20:46:32.826430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:21.030 [2024-07-22 20:46:32.826439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:21.030 [2024-07-22 20:46:32.826446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:21.030 [2024-07-22 20:46:32.826454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:21.030 [2024-07-22 20:46:32.826625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.030 [2024-07-22 20:46:32.826738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.030 [2024-07-22 20:46:32.826766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:21.030 [2024-07-22 20:46:32.828445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.837586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.838231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.838254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.838266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.838503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.838741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.838752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.838761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.842519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.851824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.852550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.852597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.852613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.852887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.853130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.853142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.853153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.856922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
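The startup notices above ("Total cores available: 3", reactors started on cores 1, 2 and 3) follow directly from the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so bits 1 through 3 are set. A hedged C sketch (not DPDK/SPDK code) that decodes such a mask into core indices:

/* Minimal sketch: decode a hex core mask such as 0xE into core indices.
 * 0xE = binary 1110, so cores 1, 2 and 3 are selected, matching the
 * three reactors reported above. Hypothetical illustration only. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xE", NULL, 16);

    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1UL) {
            printf(" %d", core);
        }
    }
    printf("\n");   /* prints: core mask 0xE selects cores: 1 2 3 */
    return 0;
}

With the target back up on those three reactors, the remaining failed resets in this stretch are the tail end of attempts issued before the listener at 10.0.0.2:4420 was re-established.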
00:39:21.030 [2024-07-22 20:46:32.866035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.866773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.866819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.866835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.867107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.867359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.867373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.867385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.871137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.880238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.880940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.880964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.880976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.881221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.881459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.881471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.881480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.885239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.030 [2024-07-22 20:46:32.894327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.895062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.895108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.895123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.895409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.895654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.895667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.895677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.899443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.908536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.909111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.909156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.909172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.909453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.909697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.909710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.909720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.913479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.030 [2024-07-22 20:46:32.922575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.923319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.923365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.923382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.923652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.923893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.923906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.923917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.927684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.030 [2024-07-22 20:46:32.936778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.030 [2024-07-22 20:46:32.937544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.030 [2024-07-22 20:46:32.937589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.030 [2024-07-22 20:46:32.937604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.030 [2024-07-22 20:46:32.937876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.030 [2024-07-22 20:46:32.938118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.030 [2024-07-22 20:46:32.938135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.030 [2024-07-22 20:46:32.938146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.030 [2024-07-22 20:46:32.941916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.031 [2024-07-22 20:46:32.951009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:32.951806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:32.951851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:32.951866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:32.952137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:32.952386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:32.952400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:32.952411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:32.956173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.031 [2024-07-22 20:46:32.965057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:32.965839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:32.965885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:32.965900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:32.966171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:32.966424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:32.966438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:32.966449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:32.970206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.031 [2024-07-22 20:46:32.979304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:32.980053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:32.980099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:32.980113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:32.980392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:32.980634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:32.980646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:32.980657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:32.984421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.031 [2024-07-22 20:46:32.993514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:32.994158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:32.994183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:32.994194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:32.994438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:32.994676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:32.994687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:32.994697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:32.998447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.031 [2024-07-22 20:46:33.007747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:33.008236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:33.008260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:33.008271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:33.008510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:33.008748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:33.008759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:33.008769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:33.012519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.031 [2024-07-22 20:46:33.021805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:33.022544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:33.022589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:33.022604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:33.022874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:33.023117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:33.023129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:33.023140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:33.026907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.031 [2024-07-22 20:46:33.035994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.031 [2024-07-22 20:46:33.036638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.031 [2024-07-22 20:46:33.036663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.031 [2024-07-22 20:46:33.036675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.031 [2024-07-22 20:46:33.036917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.031 [2024-07-22 20:46:33.037155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.031 [2024-07-22 20:46:33.037166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.031 [2024-07-22 20:46:33.037176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.031 [2024-07-22 20:46:33.040979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.294 [2024-07-22 20:46:33.050057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.050639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.050684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.294 [2024-07-22 20:46:33.050701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.294 [2024-07-22 20:46:33.050973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.294 [2024-07-22 20:46:33.051223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.294 [2024-07-22 20:46:33.051236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.294 [2024-07-22 20:46:33.051247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.294 [2024-07-22 20:46:33.055000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.294 [2024-07-22 20:46:33.064107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.064873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.064919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.294 [2024-07-22 20:46:33.064934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.294 [2024-07-22 20:46:33.065211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.294 [2024-07-22 20:46:33.065454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.294 [2024-07-22 20:46:33.065467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.294 [2024-07-22 20:46:33.065477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.294 [2024-07-22 20:46:33.069235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.294 [2024-07-22 20:46:33.078324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.079089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.079135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.294 [2024-07-22 20:46:33.079150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.294 [2024-07-22 20:46:33.079429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.294 [2024-07-22 20:46:33.079672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.294 [2024-07-22 20:46:33.079689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.294 [2024-07-22 20:46:33.079700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.294 [2024-07-22 20:46:33.083463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.294 [2024-07-22 20:46:33.092549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.093054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.093079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.294 [2024-07-22 20:46:33.093097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.294 [2024-07-22 20:46:33.093341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.294 [2024-07-22 20:46:33.093579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.294 [2024-07-22 20:46:33.093590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.294 [2024-07-22 20:46:33.093600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.294 [2024-07-22 20:46:33.097348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.294 [2024-07-22 20:46:33.106637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.107319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.107364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.294 [2024-07-22 20:46:33.107380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.294 [2024-07-22 20:46:33.107652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.294 [2024-07-22 20:46:33.107893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.294 [2024-07-22 20:46:33.107907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.294 [2024-07-22 20:46:33.107918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.294 [2024-07-22 20:46:33.111686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.294 [2024-07-22 20:46:33.120777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.294 [2024-07-22 20:46:33.121528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.294 [2024-07-22 20:46:33.121573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.121588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.121858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.122099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.122112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.122123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.125885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.295 [2024-07-22 20:46:33.134986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.135770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.135815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.135831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.136101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.136349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.136363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.136374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.140131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.295 [2024-07-22 20:46:33.149214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.149831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.149876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.149892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.150162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.150413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.150426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.150436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.154191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.295 [2024-07-22 20:46:33.163286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.164020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.164065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.164080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.164358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.164600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.164612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.164623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.168376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.295 [2024-07-22 20:46:33.177449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.178262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.178307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.178326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.178596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.178838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.178850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.178861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.182627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.295 [2024-07-22 20:46:33.191491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.192254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.192300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.192316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.192587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.192829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.192841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.192852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.196625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.295 [2024-07-22 20:46:33.205726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.206468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.206513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.206529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.206799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.207040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.207053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.207064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.210823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.295 [2024-07-22 20:46:33.219908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.220576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.220601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.220612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.220859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.221097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.221112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.295 [2024-07-22 20:46:33.221123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.295 [2024-07-22 20:46:33.224872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.295 [2024-07-22 20:46:33.233952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.295 [2024-07-22 20:46:33.234676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.295 [2024-07-22 20:46:33.234721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.295 [2024-07-22 20:46:33.234737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.295 [2024-07-22 20:46:33.235006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.295 [2024-07-22 20:46:33.235255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.295 [2024-07-22 20:46:33.235269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.235280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 [2024-07-22 20:46:33.239034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.296 [2024-07-22 20:46:33.248145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.296 [2024-07-22 20:46:33.248769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.296 [2024-07-22 20:46:33.248813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.296 [2024-07-22 20:46:33.248829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.296 [2024-07-22 20:46:33.249099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.296 [2024-07-22 20:46:33.249348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.296 [2024-07-22 20:46:33.249361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.249373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 [2024-07-22 20:46:33.253122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.296 [2024-07-22 20:46:33.262216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.296 [2024-07-22 20:46:33.262988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.296 [2024-07-22 20:46:33.263033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.296 [2024-07-22 20:46:33.263048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.296 [2024-07-22 20:46:33.263327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.296 [2024-07-22 20:46:33.263569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.296 [2024-07-22 20:46:33.263581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.263592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 [2024-07-22 20:46:33.267350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.296 [2024-07-22 20:46:33.276451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.296 [2024-07-22 20:46:33.277052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.296 [2024-07-22 20:46:33.277097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.296 [2024-07-22 20:46:33.277112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.296 [2024-07-22 20:46:33.277391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.296 [2024-07-22 20:46:33.277633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.296 [2024-07-22 20:46:33.277645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.277662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 [2024-07-22 20:46:33.281419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.296 [2024-07-22 20:46:33.290498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.296 [2024-07-22 20:46:33.291145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.296 [2024-07-22 20:46:33.291169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.296 [2024-07-22 20:46:33.291181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.296 [2024-07-22 20:46:33.291425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.296 [2024-07-22 20:46:33.291687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.296 [2024-07-22 20:46:33.291699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.291709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:21.296 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:39:21.296 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:21.296 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:21.296 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.296 [2024-07-22 20:46:33.295459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.296 [2024-07-22 20:46:33.304531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.296 [2024-07-22 20:46:33.305195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.296 [2024-07-22 20:46:33.305222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.296 [2024-07-22 20:46:33.305234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.296 [2024-07-22 20:46:33.305470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.296 [2024-07-22 20:46:33.305706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.296 [2024-07-22 20:46:33.305718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.296 [2024-07-22 20:46:33.305728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.296 [2024-07-22 20:46:33.309478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
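The shell trace interleaved above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) appears to be the harness leaving its target start-up wait: the nvmf_tgt application itself is considered up, even though the host-side reconnects still fail because no NVMe/TCP listener has been added yet. As a hedged sketch (the socket path and working directory are assumptions, not taken from this log), one way to confirm that the target application is responsive is to poll its JSON-RPC socket:

  # Illustrative only: call a harmless RPC until the target answers, which
  # proves the application is running even before any NVMe/TCP listener exists.
  # Assumes it is run from the SPDK repository root with the default RPC socket.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt RPC server is responsive"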
00:39:21.556 [2024-07-22 20:46:33.318550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.556 [2024-07-22 20:46:33.319215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.319237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.319248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.319485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.319722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.319733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.319743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.323497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.557 [2024-07-22 20:46:33.332564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.333094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.333116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.333127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.333368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.333606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.333616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.333626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.557 [2024-07-22 20:46:33.337381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.557 [2024-07-22 20:46:33.339191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.557 [2024-07-22 20:46:33.346669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.347450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.347496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.347511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.347780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.348022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.348034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.348050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.351818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.557 [2024-07-22 20:46:33.360917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.361717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.361763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.361778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.362048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.362299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.362313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.362323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.366075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.557 [2024-07-22 20:46:33.374951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.375713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.375759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.375775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.376047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.376298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.376311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.376323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.380075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.557 [2024-07-22 20:46:33.389164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.389920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.389965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.389981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.390261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.390504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.390517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.390532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.394290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:21.557 Malloc0 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.557 [2024-07-22 20:46:33.403377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.404026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.404051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.404063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.404307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.404545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.404557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.404566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:21.557 [2024-07-22 20:46:33.408322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.557 [2024-07-22 20:46:33.417626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 [2024-07-22 20:46:33.418342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.557 [2024-07-22 20:46:33.418387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388680 with addr=10.0.0.2, port=4420 00:39:21.557 [2024-07-22 20:46:33.418403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:21.557 [2024-07-22 20:46:33.418677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:21.557 [2024-07-22 20:46:33.418919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:21.557 [2024-07-22 20:46:33.418932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:21.557 [2024-07-22 20:46:33.418943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
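Taken together, the rpc_cmd calls threaded through this stretch of the log (nvmf_create_transport a few lines earlier, then bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns above, and the nvmf_subsystem_add_listener that follows just below) make up the whole target-side bring-up for this test; once the listener on 10.0.0.2:4420 exists, the host's reconnect loop finally reports "Resetting controller successful". For reference, the same sequence condensed into direct rpc.py calls, with the arguments exactly as they appear in the log (only the consolidation and the default socket path are assumptions):

  # Condensed restatement of the RPC sequence issued via rpc_cmd in this test.
  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"        # assumed default RPC socket
  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as logged
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420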
00:39:21.557 [2024-07-22 20:46:33.422707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:21.557 [2024-07-22 20:46:33.431398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.557 [2024-07-22 20:46:33.431797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.557 20:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3894988 00:39:21.557 [2024-07-22 20:46:33.478385] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:31.557 00:39:31.557 Latency(us) 00:39:31.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.557 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:31.557 Verification LBA range: start 0x0 length 0x4000 00:39:31.557 Nvme1n1 : 15.01 7552.07 29.50 9156.33 0.00 7632.94 856.75 23592.96 00:39:31.557 =================================================================================================================== 00:39:31.557 Total : 7552.07 29.50 9156.33 0.00 7632.94 856.75 23592.96 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:31.557 rmmod nvme_tcp 00:39:31.557 rmmod nvme_fabrics 00:39:31.557 rmmod nvme_keyring 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 3896253 ']' 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3896253 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3896253 ']' 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3896253 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896253 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896253' 00:39:31.557 killing process with pid 3896253 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3896253 00:39:31.557 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3896253 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.129 20:46:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:34.044 00:39:34.044 real 0m29.987s 00:39:34.044 user 1m11.248s 00:39:34.044 sys 0m7.110s 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:34.044 ************************************ 00:39:34.044 END TEST nvmf_bdevperf 00:39:34.044 ************************************ 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:34.044 20:46:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.044 ************************************ 00:39:34.044 START TEST nvmf_target_disconnect 00:39:34.044 ************************************ 00:39:34.044 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:34.305 * Looking for test storage... 00:39:34.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:34.305 20:46:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:34.305 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:39:34.306 20:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:39:40.893 20:46:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:40.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:40.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:40.893 20:46:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:40.893 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:40.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:40.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:40.894 20:46:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:40.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:39:40.894 00:39:40.894 --- 10.0.0.2 ping statistics --- 00:39:40.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.894 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:39:40.894 00:39:40.894 --- 10.0.0.1 ping statistics --- 00:39:40.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.894 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:40.894 ************************************ 00:39:40.894 START TEST nvmf_target_disconnect_tc1 00:39:40.894 ************************************ 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:40.894 20:46:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:40.894 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:40.894 EAL: No free 2048 kB hugepages reported on node 1 00:39:41.154 [2024-07-22 20:46:52.941598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.154 [2024-07-22 20:46:52.941668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000388400 with addr=10.0.0.2, port=4420 00:39:41.154 [2024-07-22 20:46:52.941728] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:41.154 [2024-07-22 20:46:52.941741] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:41.154 [2024-07-22 20:46:52.941753] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:39:41.154 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:41.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:41.154 Initializing NVMe Controllers 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:41.154 00:39:41.154 real 0m0.194s 00:39:41.154 user 0m0.087s 00:39:41.154 sys 0m0.107s 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:41.154 20:46:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:41.154 ************************************ 00:39:41.154 END TEST nvmf_target_disconnect_tc1 00:39:41.154 ************************************ 00:39:41.154 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:41.154 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:41.155 20:46:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:41.155 ************************************ 00:39:41.155 START TEST nvmf_target_disconnect_tc2 00:39:41.155 ************************************ 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3902367 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3902367 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3902367 ']' 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:41.155 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:41.155 [2024-07-22 20:46:53.137035] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:39:41.155 [2024-07-22 20:46:53.137134] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.414 EAL: No free 2048 kB hugepages reported on node 1 00:39:41.414 [2024-07-22 20:46:53.279874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:41.674 [2024-07-22 20:46:53.463988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.674 [2024-07-22 20:46:53.464032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:41.674 [2024-07-22 20:46:53.464045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.674 [2024-07-22 20:46:53.464054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.674 [2024-07-22 20:46:53.464065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.674 [2024-07-22 20:46:53.464258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:39:41.674 [2024-07-22 20:46:53.464451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:39:41.674 [2024-07-22 20:46:53.464782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:41.674 [2024-07-22 20:46:53.464803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.935 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 Malloc0 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 [2024-07-22 20:46:53.986230] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.196 20:46:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 [2024-07-22 20:46:54.015968] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3902552 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:42.196 20:46:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:42.196 EAL: No free 2048 kB hugepages reported on node 1 00:39:44.112 20:46:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3902367 00:39:44.112 20:46:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 [2024-07-22 20:46:56.052760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read 
completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Read completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 Write completed with error (sct=0, sc=8) 00:39:44.112 starting I/O failed 00:39:44.112 [2024-07-22 20:46:56.053120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:44.112 [2024-07-22 20:46:56.053642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.112 [2024-07-22 20:46:56.053676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.112 qpair failed and we were unable to recover it. 00:39:44.112 [2024-07-22 20:46:56.054034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.112 [2024-07-22 20:46:56.054047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.112 qpair failed and we were unable to recover it. 
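For reference, the environment that the trace above builds (nvmftestinit plus nvmfappstart) condenses to the sketch below. This is not the suite's script, only a hedged approximation of the commands visible in the xtrace output: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses and the workspace path are specific to this CI node, and the polling loop at the end is a simplified stand-in for the suite's waitforlisten helper (it assumes the target answers RPC on the default /var/tmp/spdk.sock socket).

    # Hypothetical condensation of the nvmftestinit + nvmfappstart steps traced above;
    # interface names, addresses and paths are assumptions taken from this particular run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # same sanity pings as in the log
    modprobe nvme-tcp

    # Start the target inside the namespace, as disconnect_init does via nvmfappstart.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll until the RPC socket answers.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done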
00:39:44.112 [2024-07-22 20:46:56.054482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.112 [2024-07-22 20:46:56.054518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.112 qpair failed and we were unable to recover it. 00:39:44.112 [2024-07-22 20:46:56.054771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.112 [2024-07-22 20:46:56.054783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.112 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.055051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.055061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.055462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.055496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.055706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.055720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.056097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.056108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.056479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.056489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.056860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.056871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.057243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.057254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.057644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.057654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 
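The target-side configuration and the fault injection that trigger the qpair failures seen here likewise reduce to a few RPC calls plus a SIGKILL. A hedged sketch, reusing the variables from the previous block and assuming that rpc_cmd in the suite is a thin wrapper around scripts/rpc.py; every NQN, size and reconnect parameter below is copied from the trace rather than invented:

    # Hypothetical replay of host/target_disconnect.sh tc2 as traced above
    # (reuses $SPDK and $nvmfpid from the previous sketch).
    rpc="$SPDK/scripts/rpc.py"
    "$rpc" bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Run the reconnect example against the listener, then kill the target underneath it.
    "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"    # outstanding I/O is aborted and every reconnect attempt now fails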
00:39:44.113 [2024-07-22 20:46:56.057856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.057866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.058238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.058249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.058641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.058651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.058986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.058997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.059219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.059229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.059619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.059630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.060014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.060025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.060395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.060405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.060717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.060727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.060940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.060951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 
00:39:44.113 [2024-07-22 20:46:56.061192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.061206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.061613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.061623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.061823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.061833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.062163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.062173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.062506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.062516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.062847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.062857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.063196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.063209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.063550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.063561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.063910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.063920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.064302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.064313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 
00:39:44.113 [2024-07-22 20:46:56.064703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.064713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.065090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.065100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.065451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.065462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.065761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.065771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.066144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.066154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.066485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.066495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.066692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.066703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.067085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.067095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.113 qpair failed and we were unable to recover it. 00:39:44.113 [2024-07-22 20:46:56.067163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.113 [2024-07-22 20:46:56.067173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.067518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.067528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 
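Each repeated "connect() failed, errno = 111" entry above is one reconnect attempt being refused because nothing is listening on 10.0.0.2:4420 after the target was killed. A quick manual spot-check (hypothetical, not part of the suite) that shows the same condition from the shell:

    # Hypothetical spot-check: confirm the listener is gone after kill -9.
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -q 4420 || echo "no listener on port 4420"
    nc -z -w 1 10.0.0.2 4420 || echo "connection refused, matching the errno 111 entries above"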
00:39:44.114 [2024-07-22 20:46:56.067787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.067799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.068111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.068121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.068432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.068442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.068756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.068766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.069151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.069161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.069419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.069428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.069749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.069759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.069999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.070009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.070388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.070397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.070748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.070758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 
00:39:44.114 [2024-07-22 20:46:56.071099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.071108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.071468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.071478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.071690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.071699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.072071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.072090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.072370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.072380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.072741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.072752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.073077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.073086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.073290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.073300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.073676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.073685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.074018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.074027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 
00:39:44.114 [2024-07-22 20:46:56.074402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.074418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.074796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.074811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.075139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.075149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.075415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.075425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.075716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.075725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.076055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.076063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.076434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.076444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.076784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.076793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.077177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.077187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.077449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.077459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 
00:39:44.114 [2024-07-22 20:46:56.077806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.077815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.078118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.078128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.078491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.078501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.078838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.078847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.079201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.079212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.114 [2024-07-22 20:46:56.080365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.114 [2024-07-22 20:46:56.080388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.114 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.080762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.080773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.081115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.081124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.081531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.081541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.081893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.081902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 
00:39:44.115 [2024-07-22 20:46:56.082273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.082285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.082588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.082598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.082935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.082945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.083242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.083253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.083507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.083518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.083869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.083879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.084121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.084130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.084603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.084613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.084945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.084954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.085312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.085323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 
00:39:44.115 [2024-07-22 20:46:56.085585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.085594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.085905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.085915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.086291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.086301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.086677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.086686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.087016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.087027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.087365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.087375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.087729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.087739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.088120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.088130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.088489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.088498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.088873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.088883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 
00:39:44.115 [2024-07-22 20:46:56.089205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.089216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.089462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.089472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.089805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.089814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.090188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.090197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.090446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.090456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.090809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.090818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.091033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.091042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.091395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.091404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.091779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.091789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.092017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.092027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 
00:39:44.115 [2024-07-22 20:46:56.092276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.092287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.092666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.115 [2024-07-22 20:46:56.092675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.115 qpair failed and we were unable to recover it. 00:39:44.115 [2024-07-22 20:46:56.093006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.093015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.093339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.093349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.093712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.093722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.094060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.094069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.094419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.094429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.094769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.094778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.094966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.094976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.095359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.095368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 
00:39:44.116 [2024-07-22 20:46:56.095702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.095713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.096049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.096058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.096411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.096421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.096596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.096606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.097023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.097033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.097418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.097432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.097839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.097848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.098094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.098103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.098298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.098310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.098639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.098649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 
00:39:44.116 [2024-07-22 20:46:56.098920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.098929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.099302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.099312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.099683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.099701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.099968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.099977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.100394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.100404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.100606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.100616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.100983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.100993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.101330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.101340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.101597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.101606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.102004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.102013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 
00:39:44.116 [2024-07-22 20:46:56.102371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.102381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.102580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.102590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.102977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.102986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.103328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.103338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.103708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.103718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.104064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.116 [2024-07-22 20:46:56.104074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.116 qpair failed and we were unable to recover it. 00:39:44.116 [2024-07-22 20:46:56.104425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.104434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.104825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.104835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.105265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.105275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.105584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.105594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 
00:39:44.117 [2024-07-22 20:46:56.105846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.105855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.106246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.106256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.106531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.106540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.106895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.106904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.107253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.107263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.107604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.107614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.107944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.107954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.108307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.108317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.108700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.108709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.109037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.109046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 
00:39:44.117 [2024-07-22 20:46:56.109405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.109416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.110117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.110137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.110500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.110511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.110957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.110967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.111307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.111317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.111682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.111691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.112022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.112032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.112379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.112389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.112735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.112744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.113102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.113112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 
00:39:44.117 [2024-07-22 20:46:56.113470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.113480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.113813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.113823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.114177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.114187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.114564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.114575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.114928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.114937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.115268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.115278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.115663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.115673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.115914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.115924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.117 [2024-07-22 20:46:56.116277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.117 [2024-07-22 20:46:56.116287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.117 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.116637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.116646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 
00:39:44.118 [2024-07-22 20:46:56.117021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.117031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.117337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.117346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.117474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.117485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.117859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.117868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.118204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.118213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.118554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.118564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.118824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.118834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.119188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.119199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.119552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.119561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.119931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.119961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 
00:39:44.118 [2024-07-22 20:46:56.120324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.120334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.120576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.120586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.120971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.120982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.121316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.121325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.121627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.121636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.122016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.122025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.122261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.122271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.122518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.122527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.122917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.122927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 00:39:44.118 [2024-07-22 20:46:56.123190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.118 [2024-07-22 20:46:56.123202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.118 qpair failed and we were unable to recover it. 
00:39:44.118 [2024-07-22 20:46:56.123612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:44.118 [2024-07-22 20:46:56.123623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 
00:39:44.118 qpair failed and we were unable to recover it. 
00:39:44.118 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-22 20:46:56.123998 through 20:46:56.196326; only the timestamps differ ...] 
00:39:44.397 [2024-07-22 20:46:56.196671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:44.397 [2024-07-22 20:46:56.196680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 
00:39:44.397 qpair failed and we were unable to recover it. 
00:39:44.397 [2024-07-22 20:46:56.197011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.197020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.197236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.197245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.197678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.197688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.198001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.198012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.198355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.198366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.198710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.198719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.199038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.199047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.199406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.199416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.199570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.199580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.199953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.199963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 
00:39:44.397 [2024-07-22 20:46:56.200297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.200307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.200613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.200623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.200984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.200994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.201363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.201373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.201702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.201711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.202042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.202052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.202400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.202410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.202787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.202798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.203038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.203048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.203427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.203436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 
00:39:44.397 [2024-07-22 20:46:56.203758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.203768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.204018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.204027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.204395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.204405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.204774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.204783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.205148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.205158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.205519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.205528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.205863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.205873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.206230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.206240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.397 qpair failed and we were unable to recover it. 00:39:44.397 [2024-07-22 20:46:56.206610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.397 [2024-07-22 20:46:56.206619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.206958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.206967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 
00:39:44.398 [2024-07-22 20:46:56.207279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.207289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.207655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.207664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.207995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.208004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.208384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.208394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.208751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.208761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.209110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.209125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.209550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.209559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.209881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.209891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.210252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.210262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.210583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.210593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 
00:39:44.398 [2024-07-22 20:46:56.210930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.210940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.211254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.211264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.212342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.212364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.212800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.212811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.213159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.213169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.213512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.213522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.213885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.213895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.214274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.214283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.214646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.214656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.215011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.215021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 
00:39:44.398 [2024-07-22 20:46:56.215374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.215384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.215705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.215715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.216065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.216074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.216325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.216335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.216704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.216713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.217046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.217056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.217533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.217542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.217894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.217905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.218258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.218268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.218718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.218727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 
00:39:44.398 [2024-07-22 20:46:56.218980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.218990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.219340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.219349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.219692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.219703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.220057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.220067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.398 [2024-07-22 20:46:56.220410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.398 [2024-07-22 20:46:56.220420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.398 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.220777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.220787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.221140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.221150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.221506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.221517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.221856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.221867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.222228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.222239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 
00:39:44.399 [2024-07-22 20:46:56.222615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.222624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.223046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.223055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.223341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.223351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.223700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.223710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.223892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.223903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.224271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.224280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.224635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.224644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.224821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.224831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.224972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.224981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.225325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.225335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 
00:39:44.399 [2024-07-22 20:46:56.225702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.225711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.226050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.226059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.226337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.226347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.226713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.226722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.227139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.227149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.227406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.227416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.227793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.227803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.228146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.228156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.228521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.228531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.228751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.228761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 
00:39:44.399 [2024-07-22 20:46:56.229076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.229085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.229445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.229455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.229816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.229825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.230177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.230187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.230589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.230599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.230979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.230988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.231315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.231324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.231676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.231688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.232080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.232090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.232461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.232474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 
00:39:44.399 [2024-07-22 20:46:56.232802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.399 [2024-07-22 20:46:56.232812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.399 qpair failed and we were unable to recover it. 00:39:44.399 [2024-07-22 20:46:56.233069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.233078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.233467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.233477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.233840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.233850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.234221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.234230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.234588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.234598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.234977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.234986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.235322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.235332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.235702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.235711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.236062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.236071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 
00:39:44.400 [2024-07-22 20:46:56.236412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.236421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.236779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.236788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.237119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.237129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.237433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.237443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.237771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.237781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.238136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.238145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.238497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.238507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.238835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.238844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.239202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.239213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.239336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.239346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 
00:39:44.400 [2024-07-22 20:46:56.239700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.239709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.240054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.240063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.240402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.240412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.240764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.240773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.241108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.241118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.241463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.241472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.241796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.241806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.242167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.242177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.242539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.242549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.242883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.242892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 
00:39:44.400 [2024-07-22 20:46:56.243254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.243264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.243516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.243525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.243872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.243880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.244283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.244298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.244697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.244706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.245038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.245047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.245420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.245429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.245764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.245774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.400 qpair failed and we were unable to recover it. 00:39:44.400 [2024-07-22 20:46:56.246095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.400 [2024-07-22 20:46:56.246105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.246490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.246499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 
00:39:44.401 [2024-07-22 20:46:56.246830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.246840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.247107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.247117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.247478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.247488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.247830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.247840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.248204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.248214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.248582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.248591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.248925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.248934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.249285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.249294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.249656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.249665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.250050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.250059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 
00:39:44.401 [2024-07-22 20:46:56.250390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.250400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.250722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.250731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.251101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.251110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.251450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.251459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.251820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.251829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.252163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.252172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.252538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.252548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.252948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.252957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.253292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.253303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.253710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.253719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 
00:39:44.401 [2024-07-22 20:46:56.254054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.254063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.254391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.254400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.254753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.254763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.255091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.255104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.255442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.255452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.255839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.255848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.256185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.256195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.256558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.256567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.256940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.256949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.257135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.257146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 
00:39:44.401 [2024-07-22 20:46:56.257469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.401 [2024-07-22 20:46:56.257480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.401 qpair failed and we were unable to recover it. 00:39:44.401 [2024-07-22 20:46:56.257835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.257845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.258207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.258218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.258570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.258579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.258913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.258922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.259277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.259286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.259672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.259681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.260014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.260025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.260355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.260365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.260689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.260698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 
00:39:44.402 [2024-07-22 20:46:56.261020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.261029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.261396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.261406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.261781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.261790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.261970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.261980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.262325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.262335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.262680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.262690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.263047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.263056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.263336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.263346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.263692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.263701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.263953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.263962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 
00:39:44.402 [2024-07-22 20:46:56.264023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.264034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.264363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.264373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.264562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.264571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.264922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.264931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.265301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.265311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.265666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.265675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.266063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.266073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.266421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.266431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.266761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.266770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.267096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.267105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 
00:39:44.402 [2024-07-22 20:46:56.267511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.267520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.267851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.267860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.268216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.268226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.268591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.268600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.268884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.268894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.269252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.269262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.269511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.269521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.269905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.269914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.402 [2024-07-22 20:46:56.270245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.402 [2024-07-22 20:46:56.270255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.402 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.270647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.270656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 
00:39:44.403 [2024-07-22 20:46:56.270987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.270998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.271362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.271371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.271749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.271758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.272032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.272043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.272423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.272432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.272767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.272776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.273133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.273142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.273364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.273376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.273743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.273753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.274138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.274147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 
00:39:44.403 [2024-07-22 20:46:56.274525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.274534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.274924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.274933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.275186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.275195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.275559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.275569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.275934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.275943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.276224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.276233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.276585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.276594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.276946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.276960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.277323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.277333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.277682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.277692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 
00:39:44.403 [2024-07-22 20:46:56.277951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.277961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.278266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.278275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.278642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.278652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.279023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.279032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.279365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.279376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.279725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.279734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.280071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.280080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.280426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.280436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.280792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.280801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.281133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.281142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 
00:39:44.403 [2024-07-22 20:46:56.281502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.281511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.281765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.281774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.282151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.282160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.282270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.282279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.282594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.282604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.403 [2024-07-22 20:46:56.282988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.403 [2024-07-22 20:46:56.282997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.403 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.283339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.283350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.283716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.283725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.284056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.284065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.284476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.284486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 
00:39:44.404 [2024-07-22 20:46:56.284688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.284698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.285083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.285093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.285462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.285472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.285800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.285809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.286165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.286175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.286395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.286406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.286788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.286798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.287154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.287165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.287527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.287537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.287880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.287890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 
00:39:44.404 [2024-07-22 20:46:56.288245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.288254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.288588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.288597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.288938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.288947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.289278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.289287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.289653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.289662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.290004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.290013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.290378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.290388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.290743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.290752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.291083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.291092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.291415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.291424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 
00:39:44.404 [2024-07-22 20:46:56.291782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.291791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.292126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.292135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.292470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.292479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.292834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.292843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.293179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.293189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.293547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.293556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.293817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.293826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.294160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.294169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.294495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.294505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.294759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.294769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 
00:39:44.404 [2024-07-22 20:46:56.295176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.295186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.295543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.295553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.295813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.295823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.404 [2024-07-22 20:46:56.296206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.404 [2024-07-22 20:46:56.296217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.404 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.296555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.296564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.296894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.296903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.297286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.297296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.297674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.297684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.298026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.298035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.298382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.298391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 
00:39:44.405 [2024-07-22 20:46:56.298763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.298773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.299132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.299145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.299406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.299416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.299745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.299755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.300010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.300019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.300288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.300297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.300634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.300643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.301009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.301021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.301376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.301385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 00:39:44.405 [2024-07-22 20:46:56.301720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.301729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it. 
00:39:44.405 [2024-07-22 20:46:56.301967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.405 [2024-07-22 20:46:56.301977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.405 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously between 20:46:56.301967 and 20:46:56.375250: every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x6150003a0000, and each qpair fails without recovering ...]
00:39:44.411 [2024-07-22 20:46:56.375241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.375250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it.
00:39:44.411 [2024-07-22 20:46:56.375585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.375594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.375954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.375971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.376349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.376358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.376729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.376738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.376928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.376939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.377307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.377317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.377604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.377973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.377981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.378355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.378365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.378734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.378744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 
00:39:44.411 [2024-07-22 20:46:56.379007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.379017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.379457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.379467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.379804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.379814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.380176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.380185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.380380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.380392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.380818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.380827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.381178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.381188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.381547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.381556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.381795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.381806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.382007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.382016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 
00:39:44.411 [2024-07-22 20:46:56.382415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.382425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.382677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.382686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.383045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.383054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.383375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.383384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.411 [2024-07-22 20:46:56.383633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.411 [2024-07-22 20:46:56.383643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.411 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.383828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.383838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.384165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.384174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.384519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.384528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.384902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.384911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.385143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.385153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 
00:39:44.412 [2024-07-22 20:46:56.385540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.385791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.385801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.386175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.386185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.386542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.386551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.386914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.386923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.387307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.387317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.387652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.387663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.388023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.388035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.388374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.388385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.388644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.388653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 
00:39:44.412 [2024-07-22 20:46:56.389102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.389111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.389478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.389489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.389875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.389884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.390274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.390284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.390677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.390686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.391053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.391062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.391408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.391418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.391749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.391759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.392151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.392160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.392435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.392445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 
00:39:44.412 [2024-07-22 20:46:56.392817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.392826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.393190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.393203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.393581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.393591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.393960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.393970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.394323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.394335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.394716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.394725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.395088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.395098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.395476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.395486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.395859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.395868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.396237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.396246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 
00:39:44.412 [2024-07-22 20:46:56.396519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.396529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.396897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.396906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.412 [2024-07-22 20:46:56.397270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.412 [2024-07-22 20:46:56.397280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.412 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.397678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.397687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.398023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.398033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.398394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.398404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.398777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.398787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.399118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.399127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.399372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.399382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.399741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.399750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 
00:39:44.413 [2024-07-22 20:46:56.400059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.400068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.400279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.400289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.400740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.400749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.413 [2024-07-22 20:46:56.401086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.413 [2024-07-22 20:46:56.401095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.413 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.401461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.401472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.401834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.401844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.402207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.402218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.402571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.402580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.402960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.402969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.403300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.403310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 
00:39:44.689 [2024-07-22 20:46:56.403562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.403572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.403931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.403941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.404294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.404303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.404688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.404697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.405053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.405062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.405415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.405425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.405794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.405805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.406163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.406172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.406527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.406537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.406901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 
00:39:44.689 [2024-07-22 20:46:56.407264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.407273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.407472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.407482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.407846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.407855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.408185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.408194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.408401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.408413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.408749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.689 [2024-07-22 20:46:56.408758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.689 qpair failed and we were unable to recover it. 00:39:44.689 [2024-07-22 20:46:56.408929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.408938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.409189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.409198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.409583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.409592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.409852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.409862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 
00:39:44.690 [2024-07-22 20:46:56.410236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.410246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.410594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.410607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.410980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.410989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.411320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.411329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.411693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.411702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.412062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.412071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.412421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.412431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.412811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.412820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.413152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.413162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.413503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.413513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 
00:39:44.690 [2024-07-22 20:46:56.413886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.413894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.414252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.414262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.414604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.414613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.414943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.414952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.415308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.415318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.415648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.415657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.415879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.415888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.416254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.416263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.416594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.416603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.416962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.416971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 
00:39:44.690 [2024-07-22 20:46:56.417365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.417374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.417708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.417718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.418064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.418074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.418428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.418437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.418768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.418778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.418999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.419009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.419355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.419365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.419727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.419736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.420067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.420076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.690 [2024-07-22 20:46:56.420467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.420483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 
00:39:44.690 [2024-07-22 20:46:56.420840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.690 [2024-07-22 20:46:56.420849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.690 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.421187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.421196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.421529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.421539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.421896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.421905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.422301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.422313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.422675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.422685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.422944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.422953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.423216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.423225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.423473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.423482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.423784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.423794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 
00:39:44.691 [2024-07-22 20:46:56.424040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.424049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.424354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.424364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.424740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.424750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.425128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.425138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.425412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.425422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.425644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.425653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.426033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.426042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.426406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.426415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.426770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.426779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.427111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.427120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 
00:39:44.691 [2024-07-22 20:46:56.427471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.427481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.427880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.427889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.428076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.428086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.428405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.428415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.428765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.428774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.429119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.429129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.429484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.429493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.429850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.429860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.430239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.430512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.430521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 
00:39:44.691 [2024-07-22 20:46:56.430953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.430962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.431301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.431310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.431487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.431496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.431859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.431869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.432252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.432261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.432620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.432633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.433031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.433040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.433380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.691 [2024-07-22 20:46:56.433389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.691 qpair failed and we were unable to recover it. 00:39:44.691 [2024-07-22 20:46:56.433762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.433771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.434131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.434140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 
00:39:44.692 [2024-07-22 20:46:56.434474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.434483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.434815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.434824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.435223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.435233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.435514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.435523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.435881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.435894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.436227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.436237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.436598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.436607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.436940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.436949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.437276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.437285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.437677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.437685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 
00:39:44.692 [2024-07-22 20:46:56.438071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.438080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.438508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.438518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.438907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.438916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.439266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.439276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.439719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.439728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.440078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.440088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.440452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.440461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.440789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.440799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.441175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.441184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.441463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.441474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 
00:39:44.692 [2024-07-22 20:46:56.441732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.441741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.442120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.442130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.442474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.442484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.442844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.442854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.443230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.443240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.443622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.443631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.443961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.443970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.444351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.444361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.444731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.444739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.445028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.445038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 
00:39:44.692 [2024-07-22 20:46:56.445392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.445401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.445704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.445714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.446075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.446084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.446498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.446508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.446800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.692 [2024-07-22 20:46:56.446810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.692 qpair failed and we were unable to recover it. 00:39:44.692 [2024-07-22 20:46:56.447160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.447169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.447507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.447516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.447746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.447755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.448114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.448123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.448547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.448556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 
00:39:44.693 [2024-07-22 20:46:56.448921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.448931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.449290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.449299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.449641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.449650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.450010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.450020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.450372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.450383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.450664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.450673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.451052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.451062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.451280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.451290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.451626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.451635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.451985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.451994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 
00:39:44.693 [2024-07-22 20:46:56.452356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.452366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.452699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.452708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.453064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.453073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.453340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.453350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.453708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.453717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.454069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.454079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.454459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.454469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.454839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.454848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.455179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.455188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.455531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.455545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 
00:39:44.693 [2024-07-22 20:46:56.455898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.455907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.456252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.456261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.456621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.456630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.456984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.456993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.457328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.457339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.457678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.457687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.458017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.458026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.458389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.458398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.458670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.458680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.459041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.459050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 
00:39:44.693 [2024-07-22 20:46:56.459382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.459392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.459661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.459671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.693 [2024-07-22 20:46:56.460026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.693 [2024-07-22 20:46:56.460035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.693 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.460286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.460295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.460575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.460584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.460835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.460844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.461218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.461228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.461554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.461563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.461885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.461894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.462261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.462271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 
00:39:44.694 [2024-07-22 20:46:56.462442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.462451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.462834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.462844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.463095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.463104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.463465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.463474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.463747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.463758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.464110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.464119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.464472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.464483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.464840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.464849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.465230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.465239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.465662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.465671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 
00:39:44.694 [2024-07-22 20:46:56.466004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.466013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.466369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.466379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.466735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.466744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.467075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.467084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.467448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.467457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.467811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.467820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.468148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.468158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.468537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.468546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.468908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.468918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.469278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.469287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 
00:39:44.694 [2024-07-22 20:46:56.469638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.469647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.470020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.470029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.470459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.470468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.694 qpair failed and we were unable to recover it. 00:39:44.694 [2024-07-22 20:46:56.470825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.694 [2024-07-22 20:46:56.470835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.471263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.471272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.471621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.471630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.471985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.471994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.472360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.472369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.472709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.472718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.473066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.473075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 
00:39:44.695 [2024-07-22 20:46:56.473501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.473510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.473842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.473852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.474183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.474192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.474568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.474577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.474959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.474968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.475414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.475448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.475920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.475933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.476428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.476462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.476829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.476841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.477178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.477188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 
00:39:44.695 [2024-07-22 20:46:56.477553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.477562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.477895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.477905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.478103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.478115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.478498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.478525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.478862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.478874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.479254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.479264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.479616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.479625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.479989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.479998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.480454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.480463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.480807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.480816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 
00:39:44.695 [2024-07-22 20:46:56.481177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.481193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.481563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.481572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.481979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.481988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.482328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.482338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.482566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.482576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.483005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.483014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.483359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.483369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.483728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.483738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.484139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.484148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 00:39:44.695 [2024-07-22 20:46:56.484481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.695 [2024-07-22 20:46:56.484490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.695 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously through 2024-07-22 20:46:56.554 ...]
00:39:44.701 [2024-07-22 20:46:56.555031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.555040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.555372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.555382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.555629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.555638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.556015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.556024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.556417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.556426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.556777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.556786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.557143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.557152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.557601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.557610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.557962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.557972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.558326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.558335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 
00:39:44.701 [2024-07-22 20:46:56.558666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.558675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.559032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.559043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.559405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.559414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.559799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.559808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.560198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.560211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.560460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.560470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.560828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.560837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.561087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.561097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.561554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.561563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.701 [2024-07-22 20:46:56.561896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.561905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 
00:39:44.701 [2024-07-22 20:46:56.562263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.701 [2024-07-22 20:46:56.562273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.701 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.562518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.562527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.562867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.562876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.563276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.563286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.563644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.563654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.564023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.564033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.564373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.564385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.564758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.564769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.565122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.565133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.565492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.565502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 
00:39:44.702 [2024-07-22 20:46:56.565874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.565885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.566194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.566209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.566583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.566594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.566943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.566953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.567285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.567296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.567645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.567655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.568019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.568029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.568388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.568400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.568775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.568786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.569139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.569154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 
00:39:44.702 [2024-07-22 20:46:56.569535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.569546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.569903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.569913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.570296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.570308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.570566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.570577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.570933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.570944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.571301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.571313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.571675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.571685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.572040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.572051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.572409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.572420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.572783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.572794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 
00:39:44.702 [2024-07-22 20:46:56.573173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.573184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.573548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.573561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.573915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.573926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.574284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.574295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.574691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.574701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.702 [2024-07-22 20:46:56.575055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.702 [2024-07-22 20:46:56.575067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.702 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.575417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.575428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.575783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.575793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.576132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.576142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.576503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.576515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 
00:39:44.703 [2024-07-22 20:46:56.576775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.576785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.577141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.577151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.577482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.577493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.577844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.577856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.578213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.578225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.578601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.578613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.578973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.578985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.579340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.579350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.579715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.579725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.580080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.580091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 
00:39:44.703 [2024-07-22 20:46:56.580451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.580462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.580770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.580780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.581145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.581156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.581503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.581513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.703 qpair failed and we were unable to recover it. 00:39:44.703 [2024-07-22 20:46:56.581888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.703 [2024-07-22 20:46:56.581900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.582248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.582258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.582617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.582627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.582984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.582994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.583238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.583248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.583418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.583429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 
00:39:44.704 [2024-07-22 20:46:56.583752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.583763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.584117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.584128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.584565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.584576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.585007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.585018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.585217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.585228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.585595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.585605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.585798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.585808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.586051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.586062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.586419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.586429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.586624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.586634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 
00:39:44.704 [2024-07-22 20:46:56.586860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.586870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.587231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.587244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.587437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.587448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.587770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.587781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.588155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.588165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.588442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.588454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.588833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.588843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.589098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.589108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.589504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.704 [2024-07-22 20:46:56.589514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.704 qpair failed and we were unable to recover it. 00:39:44.704 [2024-07-22 20:46:56.589890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.589901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 
00:39:44.705 [2024-07-22 20:46:56.590258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.590270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.590647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.590657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.591019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.591030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.591342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.591357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.591707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.591719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.592075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.592087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.592460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.592471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.592808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.592819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.593041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.593052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.593400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.593411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 
00:39:44.705 [2024-07-22 20:46:56.593769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.593780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.594119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.594131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.594379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.594390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.594779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.594790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.595145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.595156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.595502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.595513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.595867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.595879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.596141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.596152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.596503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.596514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.596890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.596901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 
00:39:44.705 [2024-07-22 20:46:56.597252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.597263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.597524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.597534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.597892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.597902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.598101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.705 [2024-07-22 20:46:56.598113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.705 qpair failed and we were unable to recover it. 00:39:44.705 [2024-07-22 20:46:56.598454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.598465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.598656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.598667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.598990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.599000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.599383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.599393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.599747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.599758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.600113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.600124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 
00:39:44.706 [2024-07-22 20:46:56.600425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.600436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.600811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.600823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.601015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.601025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.601346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.601358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.601731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.601741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.601957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.601967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.602286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.602297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.602651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.602662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.603023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.603034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.603410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.603420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 
00:39:44.706 [2024-07-22 20:46:56.603779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.603791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.604148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.604158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.604555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.604565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.604904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.604915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.605270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.605281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.605632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.605643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.605999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.606010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.606391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.606402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.706 qpair failed and we were unable to recover it. 00:39:44.706 [2024-07-22 20:46:56.606754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.706 [2024-07-22 20:46:56.606765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.607121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.607131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 
00:39:44.707 [2024-07-22 20:46:56.607489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.607501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.607882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.607893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.608253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.608263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.608618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.608629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.608990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.609000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.609253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.609265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.609651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.609662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.610019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.610030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.610293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.610303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.610680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.610691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 
00:39:44.707 [2024-07-22 20:46:56.611099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.611109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.611483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.611494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.611841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.611851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.612186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.612197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.612541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.612552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.612910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.612921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.613282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.613293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.613633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.613648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.614006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.614016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.614373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.614385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 
00:39:44.707 [2024-07-22 20:46:56.614651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.614661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.615035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.615048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.707 [2024-07-22 20:46:56.615404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.707 [2024-07-22 20:46:56.615414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.707 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.615774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.615785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.616140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.616150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.616491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.616502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.616854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.616864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.617218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.617230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.617587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.617597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.617972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.617983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 
00:39:44.708 [2024-07-22 20:46:56.618335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.618346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.618714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.618725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.619094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.619105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.619454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.619466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.619814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.619824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.620167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.620177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.620536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.620547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.620939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.620951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.621142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.621154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.621401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.621411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 
00:39:44.708 [2024-07-22 20:46:56.621774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.621785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.621981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.621994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.622362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.622372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.622728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.622738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.623093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.623104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.623469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.623479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.623838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.623849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.708 [2024-07-22 20:46:56.624198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.708 [2024-07-22 20:46:56.624212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.708 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.624543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.624554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.624932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.624943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 
00:39:44.709 [2024-07-22 20:46:56.625129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.625139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.625459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.625470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.625824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.625834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.626180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.626191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.626534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.626546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.626901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.626911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.627347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.627358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.627710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.627721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.628076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.628088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.628461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.628471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 
00:39:44.709 [2024-07-22 20:46:56.628859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.628869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.629246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.629273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.629640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.629651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.630084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.630094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.630444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.630457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.630735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.630745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.631006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.631016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.631374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.631385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.631739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.631749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.632132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.632142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 
00:39:44.709 [2024-07-22 20:46:56.632495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.632506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.632860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.632871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.633227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.709 [2024-07-22 20:46:56.633238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.709 qpair failed and we were unable to recover it. 00:39:44.709 [2024-07-22 20:46:56.633583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.633593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.633951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.633962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.634319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.634330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.634678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.634689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.635026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.635037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.635383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.635393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.635741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.635751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 
00:39:44.710 [2024-07-22 20:46:56.636098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.636109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.636358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.636372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.636720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.636731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.637163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.637175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.637371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.637382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.637733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.637744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.638139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.638151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.638509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.638520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.638714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.638726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.639056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.639067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 
00:39:44.710 [2024-07-22 20:46:56.639420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.639430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.639786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.639797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.710 [2024-07-22 20:46:56.640227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.710 [2024-07-22 20:46:56.640238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.710 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.640581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.640591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.640835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.640845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.641205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.641216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.641564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.641575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.641769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.641780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.642107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.642119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.642467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.642478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 
00:39:44.711 [2024-07-22 20:46:56.642839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.642850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.643226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.643239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.643630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.643640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.643990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.644000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.644348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.644359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.644718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.644729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.645130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.645141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.645492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.645504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.645813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.645824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.646207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.646218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 
00:39:44.711 [2024-07-22 20:46:56.646575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.646586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.646940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.646951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.647304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.647315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.647705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.647716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.648062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.648072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.648423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.648434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.648782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.648793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.649166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.649176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.649534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.649545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 00:39:44.711 [2024-07-22 20:46:56.649895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.711 [2024-07-22 20:46:56.649906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.711 qpair failed and we were unable to recover it. 
00:39:44.711 [2024-07-22 20:46:56.650263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.650273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.650652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.650663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.651014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.651026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.651378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.651388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.651739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.651749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.652089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.652100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.652478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.652489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.652841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.652852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.653272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.653283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.653659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.653669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 
00:39:44.712 [2024-07-22 20:46:56.654053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.654064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.654419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.654430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.654791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.654802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.655134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.655145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.655521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.655532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.655891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.655901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.656260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.656271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.656620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.656631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.656974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.656984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.657363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.657373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 
00:39:44.712 [2024-07-22 20:46:56.657735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.657746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.658123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.658135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.658489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.658500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.658855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.658865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.659218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.659230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.659607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.659622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.659974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.659984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.660361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.660373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.660726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.660736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 00:39:44.712 [2024-07-22 20:46:56.661106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.661117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.712 qpair failed and we were unable to recover it. 
00:39:44.712 [2024-07-22 20:46:56.661316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.712 [2024-07-22 20:46:56.661326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.713 qpair failed and we were unable to recover it.
00:39:44.713 [... the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-07-22 20:46:56.661703 through 20:46:56.736271 ...]
00:39:44.991 [2024-07-22 20:46:56.736579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.736590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it.
00:39:44.991 [2024-07-22 20:46:56.736964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.736975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.737237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.737247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.737608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.737619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.737975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.737987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.738358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.738369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.738746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.738756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.739064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.739075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.739449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.739459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.739826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.739839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.740220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.740231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 
00:39:44.991 [2024-07-22 20:46:56.740583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.740593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.740915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.740925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.741276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.741286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.741706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.741716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.742069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.742081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.742424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.742434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.742789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.742800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.743188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.743199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.743562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.743573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.743930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.743941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 
00:39:44.991 [2024-07-22 20:46:56.744299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.744310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.744689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.744700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.745059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.745069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.745430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.745440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.745796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.745806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.746187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.746197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.746551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.746561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.746820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.746830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.747183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.747193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.747567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.747578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 
00:39:44.991 [2024-07-22 20:46:56.747923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.747934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.991 qpair failed and we were unable to recover it. 00:39:44.991 [2024-07-22 20:46:56.748290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.991 [2024-07-22 20:46:56.748301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.748550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.748560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.748946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.748957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.749176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.749187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.749569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.749580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.749933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.749943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.750315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.750326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.750689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.750700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.751056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.751067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 
00:39:44.992 [2024-07-22 20:46:56.751423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.751434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.751848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.751862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.752209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.752219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.752580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.752591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.752946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.752957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.753331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.753341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.753693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.753703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.754060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.754072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.754422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.754434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.754773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.754783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 
00:39:44.992 [2024-07-22 20:46:56.754979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.754990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.755307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.755319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.755679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.755690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.756061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.756072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.756417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.756428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.756777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.756788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.757133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.757144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.757531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.757542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.757882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.757892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.758245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.758256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 
00:39:44.992 [2024-07-22 20:46:56.758621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.758631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.758969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.758981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.759169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.759181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.759519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.759530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.759957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.759967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.992 [2024-07-22 20:46:56.760308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.992 [2024-07-22 20:46:56.760319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.992 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.760676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.760686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.760887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.760898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.761218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.761228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.761489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.761499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 
00:39:44.993 [2024-07-22 20:46:56.761760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.761770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.762036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.762045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.762402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.762413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.762753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.762763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.763129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.763140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.763500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.763511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.763873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.763885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.764259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.764270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.764695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.764705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.764899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.764910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 
00:39:44.993 [2024-07-22 20:46:56.765221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.765231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.765529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.765541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.765897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.765908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.766260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.766271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.766625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.766635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.767020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.767031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.767392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.767403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.767757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.767780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.768155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.768167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.768541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.768552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 
00:39:44.993 [2024-07-22 20:46:56.768905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.768915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.769268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.769280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.769684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.769694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.770065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.770075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.770527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.770537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.770880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.770891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.771291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.771301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.771681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.771691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.772044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.772054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.993 qpair failed and we were unable to recover it. 00:39:44.993 [2024-07-22 20:46:56.772418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.993 [2024-07-22 20:46:56.772429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 
00:39:44.994 [2024-07-22 20:46:56.772792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.772802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.773177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.773188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.773538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.773550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.773909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.773920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.774276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.774287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.774665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.774679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.775023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.775034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.775393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.775405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.775758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.775769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.776143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.776153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 
00:39:44.994 [2024-07-22 20:46:56.776506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.776517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.776865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.776876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.777231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.777242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.777626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.777637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.777991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.778001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.778352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.778363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.778723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.778733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.779077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.779088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.779427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.779437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.779791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.779802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 
00:39:44.994 [2024-07-22 20:46:56.780149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.780160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.780495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.780506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.780866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.780877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.781277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.781287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.781655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.781665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.782038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.782049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.782403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.782414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.782621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.782631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.782996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.783008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.783398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.783409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 
00:39:44.994 [2024-07-22 20:46:56.783666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.783676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.784051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.784062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.784413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.784424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.784797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.785155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.785166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.785520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.785530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.785885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.994 [2024-07-22 20:46:56.785897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.994 qpair failed and we were unable to recover it. 00:39:44.994 [2024-07-22 20:46:56.786321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.786331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.786669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.786680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.786946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.786957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 
00:39:44.995 [2024-07-22 20:46:56.787317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.787329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.787686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.787696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.788050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.788061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.788424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.788436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.788787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.788797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.789141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.789153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.789511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.789522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.789745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.789755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.790108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.790118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.790472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.790484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 
00:39:44.995 [2024-07-22 20:46:56.790835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.790846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.791206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.791216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.791569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.791579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.791954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.791965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.792320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.792330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.792689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.792700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.793052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.793062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.793437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.793448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.793706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.793717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.794069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.794079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 
00:39:44.995 [2024-07-22 20:46:56.794539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.794550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.794824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.794834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.795187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.795197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.795578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.795588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.795960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.795971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.796455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.796490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.796850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.796863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.797121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.797132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.797478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.797493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.797884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.797899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 
00:39:44.995 [2024-07-22 20:46:56.798265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.798276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.798620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.798632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.798984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.798995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.799370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.799381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.799735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.995 [2024-07-22 20:46:56.799746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.995 qpair failed and we were unable to recover it. 00:39:44.995 [2024-07-22 20:46:56.800140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.800151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.800578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.800590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.800927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.800938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.801292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.801304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.801501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.801513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 
00:39:44.996 [2024-07-22 20:46:56.801874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.801885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.802216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.802227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.802576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.802586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.802777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.802788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.803175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.803186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.803567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.803578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.803930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.803943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.804301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.804312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.804668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.804678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.805053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.805064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 
00:39:44.996 [2024-07-22 20:46:56.805421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.805431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.805785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.805795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.806119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.806130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.806394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.806405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.806764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.806775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.807126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.807137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.807458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.807469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.807716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.807726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.808079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.808090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.808465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.808476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 
00:39:44.996 [2024-07-22 20:46:56.808813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.808824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.809199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.809213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.809573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.809583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.809938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.809949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.810141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.810152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.810469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.810479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.810835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.810845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.811275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.811285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.811645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.811657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.812033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.812045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 
00:39:44.996 [2024-07-22 20:46:56.812403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.812413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.812771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.812782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.813138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.996 [2024-07-22 20:46:56.813148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.996 qpair failed and we were unable to recover it. 00:39:44.996 [2024-07-22 20:46:56.813499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.813509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.813862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.813873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.814225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.814237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.814621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.814634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.815014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.815025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.815383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.815393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.815754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.815765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 
00:39:44.997 [2024-07-22 20:46:56.816040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.816051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.816426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.816437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.816793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.816805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.817160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.817170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.817526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.817538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.817912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.817922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.818278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.818290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.818653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.818664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.819019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.819030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.819256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.819266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 
00:39:44.997 [2024-07-22 20:46:56.819497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.819509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.819871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.819883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.820235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.820246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.820628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.820642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.821023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.821033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.821439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.821449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.821805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.821815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.822157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.822167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.822529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.822540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.822892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.822903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 
00:39:44.997 [2024-07-22 20:46:56.823268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.823278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.823598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.823610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.823970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.823981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.824337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.824348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.824692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.997 [2024-07-22 20:46:56.824703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.997 qpair failed and we were unable to recover it. 00:39:44.997 [2024-07-22 20:46:56.825039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.825050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.825396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.825407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.825751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.825762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.825998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.826010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.826398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.826409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 
00:39:44.998 [2024-07-22 20:46:56.826769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.826780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.827139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.827150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.827508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.827519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.827891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.827903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.828263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.828274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.828599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.828610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.828965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.828976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.829351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.829362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.829724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.829735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.830089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.830099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 
00:39:44.998 [2024-07-22 20:46:56.830473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.830485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.830858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.830869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.831216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.831228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.831582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.831592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.831950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.831961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.832347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.832358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.832756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.832767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.833164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.833175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.833521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.833532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.833895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.833905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 
00:39:44.998 [2024-07-22 20:46:56.834260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.834270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.834633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.834644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.834998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.835009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.835369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.835379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.835742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.835752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.836110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.836121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.836481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.836491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.836856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.836868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.837151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.837161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.837519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.837530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 
00:39:44.998 [2024-07-22 20:46:56.837929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.837940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.838316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.838327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.998 [2024-07-22 20:46:56.838698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.998 [2024-07-22 20:46:56.838709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.998 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.839152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.839534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.839545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.839920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.839932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.840286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.840297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.840707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.840718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.841067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.841081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.841431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.841443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 
00:39:44.999 [2024-07-22 20:46:56.841800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.841811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.842034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.842044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.842402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.842414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.842628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.842638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.843010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.843022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.843391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.843401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.843763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.843774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.844030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.844055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.844411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.844423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 00:39:44.999 [2024-07-22 20:46:56.844616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.999 [2024-07-22 20:46:56.844626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:44.999 qpair failed and we were unable to recover it. 
00:39:44.999 [2024-07-22 20:46:56.845006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.999 [2024-07-22 20:46:56.845017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:44.999 qpair failed and we were unable to recover it.
[... the same two-line failure repeats for every subsequent reconnection attempt against tqpair=0x6150003a0000 (addr=10.0.0.2, port=4420) from 20:46:56.845 through 20:46:56.919, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:39:45.005 [2024-07-22 20:46:56.919345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.005 [2024-07-22 20:46:56.919356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.005 qpair failed and we were unable to recover it.
00:39:45.005 [2024-07-22 20:46:56.919711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.919721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.920074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.920084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.920425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.920437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.920780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.920790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.921145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.921156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.921509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.921521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.921879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.921890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.922266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.922277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.922472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.922483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.922860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.922871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 
00:39:45.005 [2024-07-22 20:46:56.923224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.923235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.923608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.923618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.923971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.923982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.924330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.924340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.924694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.005 [2024-07-22 20:46:56.924704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.005 qpair failed and we were unable to recover it. 00:39:45.005 [2024-07-22 20:46:56.925088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.925099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.925471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.925482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.925911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.925921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.926267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.926279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.926661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.926671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 
00:39:45.006 [2024-07-22 20:46:56.927025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.927036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.927392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.927403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.927754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.927766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.928109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.928120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.928483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.928494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.928850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.928861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.929251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.929261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.929612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.929623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.929976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.929987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.930341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.930352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 
00:39:45.006 [2024-07-22 20:46:56.930706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.930717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.931052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.931062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.931411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.931423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.931771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.931782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.932132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.932143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.932477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.932488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.932842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.932852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.933207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.933217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.933554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.933564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.933940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.933950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 
00:39:45.006 [2024-07-22 20:46:56.934303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.934314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.934678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.934689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.935106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.935117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.935459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.935475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.935822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.935833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.936192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.936209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.936542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.936552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.936938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.936948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.937305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.937317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.937659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.937671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 
00:39:45.006 [2024-07-22 20:46:56.938023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.938033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.006 [2024-07-22 20:46:56.938412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.006 [2024-07-22 20:46:56.938423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.006 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.938775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.938785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.939099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.939110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.939487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.939498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.939750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.939760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.940098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.940106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.940465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.940474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.940829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.940838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.941216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.941226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 
00:39:45.007 [2024-07-22 20:46:56.941579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.941588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.941978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.941987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.942343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.942352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.942704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.942713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.943066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.943075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.943364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.943375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.943735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.943746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.944125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.944135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.944490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.944502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.944857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.944869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 
00:39:45.007 [2024-07-22 20:46:56.945256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.945266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.945626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.945636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.945992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.946003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.946361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.946372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.946730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.946741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.947113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.947124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.947483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.947495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.947854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.947864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.948219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.948231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.948618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.948630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 
00:39:45.007 [2024-07-22 20:46:56.948984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.007 [2024-07-22 20:46:56.948996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.007 qpair failed and we were unable to recover it. 00:39:45.007 [2024-07-22 20:46:56.949344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.949355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.949708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.949720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.950095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.950106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.950472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.950484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.950840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.950853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.951227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.951238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.951626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.951638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.952078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.952090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.952464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.952475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 
00:39:45.008 [2024-07-22 20:46:56.952824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.952835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.953217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.953229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.953583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.953594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.953962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.953974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.954360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.954372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.954684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.954695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.954961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.954973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.955315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.955326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.955689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.955700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.956086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.956096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 
00:39:45.008 [2024-07-22 20:46:56.956356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.956367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.956817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.956827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.957182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.957192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.957565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.957576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.957932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.957943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.958292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.958302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.958667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.958678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.959054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.959069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.959416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.959426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.959779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.959789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 
00:39:45.008 [2024-07-22 20:46:56.960091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.960102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.960450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.960461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.960866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.960877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.961293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.961303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.961507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.961519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.961878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.008 [2024-07-22 20:46:56.961888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.008 qpair failed and we were unable to recover it. 00:39:45.008 [2024-07-22 20:46:56.962078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.962088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.962414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.962425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.962852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.962863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.963284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.963295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 
00:39:45.009 [2024-07-22 20:46:56.963659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.963670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.964019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.964029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.964382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.964393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.964774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.964784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.965138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.965149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.965508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.965521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.965883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.965895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.966274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.966285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.966644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.966655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 00:39:45.009 [2024-07-22 20:46:56.967001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.009 [2024-07-22 20:46:56.967011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.009 qpair failed and we were unable to recover it. 
00:39:45.009 [2024-07-22 20:46:56.967365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.009 [2024-07-22 20:46:56.967376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.009 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 20:46:56.967365 through 20:46:57.043078 ...]
00:39:45.295 [2024-07-22 20:46:57.043067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.295 [2024-07-22 20:46:57.043078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.295 qpair failed and we were unable to recover it.
00:39:45.295 [2024-07-22 20:46:57.043423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.043435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.295 [2024-07-22 20:46:57.043828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.043838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.295 [2024-07-22 20:46:57.044055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.044066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.295 [2024-07-22 20:46:57.044414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.044428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.295 [2024-07-22 20:46:57.044802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.044813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.295 [2024-07-22 20:46:57.045207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.295 [2024-07-22 20:46:57.045218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.295 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.045579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.045589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.045900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.045911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.046271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.046282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.046637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.046647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 
00:39:45.296 [2024-07-22 20:46:57.047000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.047010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.047377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.047388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.047745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.047756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.048016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.048026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.048376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.048387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.048583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.048595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.048961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.048972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.049327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.049338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.049555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.049566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.049957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.049967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 
00:39:45.296 [2024-07-22 20:46:57.050317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.050328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.050699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.050709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.051069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.051081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.051463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.051478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.051836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.051847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.052207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.052221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.052582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.052593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.052974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.052985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.053394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.053405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.053751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.053762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 
00:39:45.296 [2024-07-22 20:46:57.054117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.054128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.054492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.054503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.054859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.054871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.055227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.055238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.055610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.055621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.056003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.056014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.056375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.056387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.056795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.056807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.057160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.057171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.057518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.057530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 
00:39:45.296 [2024-07-22 20:46:57.057884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.057895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.058252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.058264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.296 [2024-07-22 20:46:57.058593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.296 [2024-07-22 20:46:57.058604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.296 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.058974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.058985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.059208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.059224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.059621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.059632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.059855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.059867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.060090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.060101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.060427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.060439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.060793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.060805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 
00:39:45.297 [2024-07-22 20:46:57.061159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.061170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.061564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.061575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.061936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.061947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.062295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.062306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.062655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.062667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.063049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.063061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.063344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.063355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.063726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.063737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.064088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.064100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.064461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.064472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 
00:39:45.297 [2024-07-22 20:46:57.064830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.064841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.065195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.065210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.065542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.065553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.065900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.065911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.066296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.066308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.066769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.066782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.067132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.067144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.067471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.067482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.067708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.067719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.068093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.068105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 
00:39:45.297 [2024-07-22 20:46:57.068549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.068560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.068932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.068943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.069292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.069303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.297 qpair failed and we were unable to recover it. 00:39:45.297 [2024-07-22 20:46:57.069658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.297 [2024-07-22 20:46:57.069669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.070057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.070068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.070208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.070219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.070451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.070462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.070711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.070721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.071092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.071103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.071443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.071454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 
00:39:45.298 [2024-07-22 20:46:57.071809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.071820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.072181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.072193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.072539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.072550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.072916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.072928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.073282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.073293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.073660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.073671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.074030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.074041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.074397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.074412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.074620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.074632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.074983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.074994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 
00:39:45.298 [2024-07-22 20:46:57.075350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.075361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.075691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.075702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.076053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.076064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.076413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.076424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.076770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.076780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.077133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.077144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.077501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.077512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.077869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.077880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.078226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.078237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.078576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.078587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 
00:39:45.298 [2024-07-22 20:46:57.078943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.078953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.079303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.079314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.079693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.079703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.080081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.080091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.080401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.080412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.080791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.080804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.081157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.081168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.081620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.081631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.081982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.081993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.082364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.082375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 
00:39:45.298 [2024-07-22 20:46:57.082739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.298 [2024-07-22 20:46:57.082749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.298 qpair failed and we were unable to recover it. 00:39:45.298 [2024-07-22 20:46:57.083001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.083011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.083367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.083378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.083688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.083699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.084072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.084083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.084450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.084461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.084704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.084714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.085028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.085039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.085405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.085416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.085790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.085801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 
00:39:45.299 [2024-07-22 20:46:57.086035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.086045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.086277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.086287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.086648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.086659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.086996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.087008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.087316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.087326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.087699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.087709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.087899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.087910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.088235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.088246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.088602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.088613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.088967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.088977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 
00:39:45.299 [2024-07-22 20:46:57.089334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.089347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.089768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.089779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.090135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.090146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.090317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.090328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.090678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.090689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.090911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.090921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.091298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.091308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.091666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.091676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.092031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.092042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.092417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.092428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 
00:39:45.299 [2024-07-22 20:46:57.092769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.092780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.093130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.093142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.093493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.093504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.093882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.093893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.094248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.094258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.094612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.094625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.094973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.094984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.095357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.095369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.095726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.299 [2024-07-22 20:46:57.095737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.299 qpair failed and we were unable to recover it. 00:39:45.299 [2024-07-22 20:46:57.096093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.096104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 
00:39:45.300 [2024-07-22 20:46:57.096455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.096466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.096842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.096856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.097211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.097222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.097606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.097616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.098017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.098028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.098376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.098387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.098785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.098795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.099141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.099152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.099376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.099386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.099753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.099765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 
00:39:45.300 [2024-07-22 20:46:57.100117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.100128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.100489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.100502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.100849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.100860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.101275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.101286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.101545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.101555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.101908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.101919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.102263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.102274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.102665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.102677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.103031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.103041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.103395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.103406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 
00:39:45.300 [2024-07-22 20:46:57.103760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.103771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.104147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.104158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.104519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.104532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.104887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.104898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.105271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.105282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.105670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.105680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.106032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.106042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.106441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.106452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.106809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.106820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.107156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.107168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 
00:39:45.300 [2024-07-22 20:46:57.107523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.107534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.107932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.107944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.108299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.108309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.108681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.108692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.109035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.109046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.109358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.109369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.300 qpair failed and we were unable to recover it. 00:39:45.300 [2024-07-22 20:46:57.109660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.300 [2024-07-22 20:46:57.109670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.110135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.110145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.110502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.110512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.110886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.110897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 
00:39:45.301 [2024-07-22 20:46:57.111251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.111262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.111638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.111648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.111816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.111827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.112183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.112193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.112573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.112583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.112937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.112948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.113323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.113335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.113689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.113699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.114053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.114064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.114423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.114434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 
00:39:45.301 [2024-07-22 20:46:57.114812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.114823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.115175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.115186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.115534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.115544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.115898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.115909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.116234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.116244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.116630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.116641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.117006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.117017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.117363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.117375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.117748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.117760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.118108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.118118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 
00:39:45.301 [2024-07-22 20:46:57.118435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.118446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.118799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.118810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.119197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.119213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.119568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.119580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.119933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.119944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.120299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.120314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.120687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.120698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.301 [2024-07-22 20:46:57.121045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.301 [2024-07-22 20:46:57.121056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.301 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.121414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.121425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.121783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.121795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 
00:39:45.302 [2024-07-22 20:46:57.122172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.122183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.122530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.122541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.122895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.122906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.123217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.123229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.123585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.123596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.123991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.124002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.124361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.124373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.124727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.124738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.125066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.125077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.125447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.125459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 
00:39:45.302 [2024-07-22 20:46:57.125805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.125817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.126172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.126183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.126548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.126560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.126914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.126925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.127282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.127293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.127650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.127661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.128031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.128043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.128388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.128399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.128753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.128763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.129119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.129130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 
00:39:45.302 [2024-07-22 20:46:57.129505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.129516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.129873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.129884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.130237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.130248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.130607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.130617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.130991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.131002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.131336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.131347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.131704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.131714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.131910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.131921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.132278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.132289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.132646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.132657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 
00:39:45.302 [2024-07-22 20:46:57.133014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.133025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.302 [2024-07-22 20:46:57.133372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.302 [2024-07-22 20:46:57.133383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.302 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.133739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.133752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.134099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.134110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.134494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.134505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.134860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.134872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.135246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.135258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.135701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.135712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.136066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.136078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.136425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.136436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 
00:39:45.303 [2024-07-22 20:46:57.136844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.136855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.137210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.137221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.137581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.137591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.137988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.137998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.138365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.138376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.138749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.138760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.139116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.139126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.139368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.139378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.139750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.139760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.140114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.140125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 
00:39:45.303 [2024-07-22 20:46:57.140483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.140494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.140841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.140853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.141229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.141241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.141576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.141587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.141932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.141943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.142285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.142296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.303 qpair failed and we were unable to recover it. 00:39:45.303 [2024-07-22 20:46:57.142670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.303 [2024-07-22 20:46:57.142681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.143038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.143048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.143403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.143415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.143776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.143790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 
00:39:45.304 [2024-07-22 20:46:57.144122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.144133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.144327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.144339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.144510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.144521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.144884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.144895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.145114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.145125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.145482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.145492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.145924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.145935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.146281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.146292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.146672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.146683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.147043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.147053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 
00:39:45.304 [2024-07-22 20:46:57.147416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.147428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.147782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.147792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.148179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.148193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.148390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.148401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.148656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.148666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.149025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.149035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.149408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.149420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.149808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.149818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.150179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.150190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.150559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.150570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 
00:39:45.304 [2024-07-22 20:46:57.150949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.304 [2024-07-22 20:46:57.150960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.304 qpair failed and we were unable to recover it. 00:39:45.304 [2024-07-22 20:46:57.151315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.151333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.151697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.151709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.152062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.152072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.152454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.152466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.152826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.152837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.153202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.153213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.153570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.153581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.153956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.153967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.154416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.154450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 
00:39:45.305 [2024-07-22 20:46:57.154814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.154828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.155194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.155211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.155579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.155590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.155948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.155959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.156408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.156443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.156828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.156840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.157221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.157232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.157591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.157602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.157964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.157975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.158364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.158375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 
00:39:45.305 [2024-07-22 20:46:57.158760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.158771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.159131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.159143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.159491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.159502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.159854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.159865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.160120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.160131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.160497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.305 [2024-07-22 20:46:57.160508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.305 qpair failed and we were unable to recover it. 00:39:45.305 [2024-07-22 20:46:57.160863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.160874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.161227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.161239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.161616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.161627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.161984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.161995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 
00:39:45.306 [2024-07-22 20:46:57.162363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.162373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.162729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.162741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.163112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.163125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.163482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.163493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.163847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.163859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.164213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.164225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.164609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.164620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.164979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.164991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.165372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.165384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.165742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.165753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 
00:39:45.306 [2024-07-22 20:46:57.166135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.166147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.166504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.166515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.166867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.166879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.167102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.167118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.167496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.167507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.167861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.167871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.168227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.168239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.168598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.168608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.168917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.168929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.306 [2024-07-22 20:46:57.169285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.169296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 
00:39:45.306 [2024-07-22 20:46:57.169652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.306 [2024-07-22 20:46:57.169662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.306 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.170015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.170024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.170371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.170380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.170624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.170632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.170985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.170994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.171350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.171359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.171736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.171745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.172101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.172110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.172487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.172496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.172851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.172860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 
00:39:45.307 [2024-07-22 20:46:57.173247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.173258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.173613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.173624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.173827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.173839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.174225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.174236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.174580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.174592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.174782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.174794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.175105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.175117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.175464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.175476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.175851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.175863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.176218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.176229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 
00:39:45.307 [2024-07-22 20:46:57.176426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.176438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.176795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.176807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.177183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.177197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.177581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.177592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.177947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.307 [2024-07-22 20:46:57.177958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.307 qpair failed and we were unable to recover it. 00:39:45.307 [2024-07-22 20:46:57.178315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.178327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.178679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.178691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.179047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.179058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.179484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.179496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.179749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.179760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 
00:39:45.308 [2024-07-22 20:46:57.180087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.180098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.180481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.180493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.180846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.180857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.181212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.181224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.181578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.181589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.181834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.181846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.182207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.182219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.182555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.182566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.182942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.182954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.183318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.183329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 
00:39:45.308 [2024-07-22 20:46:57.183705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.183715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.184069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.184079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.184438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.308 [2024-07-22 20:46:57.184450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.308 qpair failed and we were unable to recover it. 00:39:45.308 [2024-07-22 20:46:57.184822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.184832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.185209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.185221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.185660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.185671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.186052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.186062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.186523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.186558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.186931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.186945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.187353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.187364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 
00:39:45.309 [2024-07-22 20:46:57.187705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.187715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.188145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.188155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.188504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.188515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.188871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.188882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.189209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.189220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.189398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.189411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.189780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.189790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.190009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.190025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.190332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.190343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.190697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.190709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 
00:39:45.309 [2024-07-22 20:46:57.190956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.190967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.191358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.191370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.191752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.191765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.191985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.191995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.192357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.192368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.192740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.192751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.193116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.193128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.193556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.193567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.309 [2024-07-22 20:46:57.193780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.309 [2024-07-22 20:46:57.193790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.309 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.194159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.194170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 
00:39:45.310 [2024-07-22 20:46:57.194527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.194538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.194895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.194906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.195179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.195190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.195568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.195579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.195961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.195973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.196354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.196365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.196742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.196753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.197102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.197114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.197373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.197384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.197734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.197746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 
00:39:45.310 [2024-07-22 20:46:57.198098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.198109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.198475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.198486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.198747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.198758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.199146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.199157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.199552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.199563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.199871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.199882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.200243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.200254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.200618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.200629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.200850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.200861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.201212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.201223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 
00:39:45.310 [2024-07-22 20:46:57.201456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.201466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.201793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.201803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.202192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.202207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.310 [2024-07-22 20:46:57.202552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.310 [2024-07-22 20:46:57.202564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.310 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.202956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.202966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.203341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.203352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.203720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.203731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.204119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.204130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.204396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.204408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.204780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.204791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 
00:39:45.311 [2024-07-22 20:46:57.205147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.205158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.205498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.205510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.205858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.205873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.206255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.206266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.206632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.206644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.206978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.206988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.207321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.207332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.207690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.207700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.208097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.208108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.208334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.208344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 
00:39:45.311 [2024-07-22 20:46:57.208749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.208758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.209142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.209152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.209396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.209406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.209765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.209775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.210131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.210142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.210496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.210507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.210863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.210874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.211228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.211239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.211629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.211640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.211980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.211991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 
00:39:45.311 [2024-07-22 20:46:57.212370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.212382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.311 [2024-07-22 20:46:57.212757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.311 [2024-07-22 20:46:57.212787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.311 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.213152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.213164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.213537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.213548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.213922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.213933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.214317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.214328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.214690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.214702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.215078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.215089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.215536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.215547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.215925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.215936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 
00:39:45.312 [2024-07-22 20:46:57.216342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.216353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.216733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.216743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.217093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.217104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.217356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.217366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.217630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.217640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.217900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.217911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.218262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.218273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.218696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.218707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.218954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.218965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.219307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.219319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 
00:39:45.312 [2024-07-22 20:46:57.219732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.219743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.220098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.220110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.220448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.220460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.220836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.220847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.221205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.221215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.221587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.221598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.221961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.221972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.222350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.222361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.222710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.222722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.222919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.222930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 
00:39:45.312 [2024-07-22 20:46:57.223192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.223212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.223558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.223569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.223945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.223955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.224322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.224332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.224699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.224710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.225090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.312 [2024-07-22 20:46:57.225101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.312 qpair failed and we were unable to recover it. 00:39:45.312 [2024-07-22 20:46:57.225523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.225534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.225888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.225899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.226255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.226266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.226673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.226685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 
00:39:45.313 [2024-07-22 20:46:57.227034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.227046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.227235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.227246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.227677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.227690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.228044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.228055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.228270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.228282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.228665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.228676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.229043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.229054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.229445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.229456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.229791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.229802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.230229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.230241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 
00:39:45.313 [2024-07-22 20:46:57.230578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.230589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.230964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.230974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.231324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.231336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.231562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.231571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.231927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.231937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.232282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.232294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.232667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.232677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.232913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.232923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.233276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.233286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.233635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.233645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 
00:39:45.313 [2024-07-22 20:46:57.234001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.234012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.234391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.234402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.234748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.234761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.235145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.235155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.235524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.235535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.235910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.235925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.313 [2024-07-22 20:46:57.236186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-07-22 20:46:57.236196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.313 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.236618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.236629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.236849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.236860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.237226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.237237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 
00:39:45.314 [2024-07-22 20:46:57.237615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.237625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.238004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.238014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.238395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.238405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.238764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.238775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.239112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.239124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.239485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.239495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.239847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.239857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.240220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.240231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.240465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.240475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.240856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.240866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 
00:39:45.314 [2024-07-22 20:46:57.241222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.241233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.241527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.241537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.241900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.241910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.242258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.242269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.314 [2024-07-22 20:46:57.242544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-07-22 20:46:57.242555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.314 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.242915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.242926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.243288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.243300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.243677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.243688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.244051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.244062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.244414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.244426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 
00:39:45.315 [2024-07-22 20:46:57.244784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.244794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.245137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.245148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.245505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.245517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.245858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.245869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.246217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.246228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.246569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.246579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.246939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.246949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.247306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.247317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.247689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.247701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.248067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.248078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 
00:39:45.315 [2024-07-22 20:46:57.248327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.248338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.248705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.248715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.249073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.249084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.249435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.249447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.249811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.249821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.250191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.250205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.250564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.250926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.250938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.251293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.251304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.315 qpair failed and we were unable to recover it. 00:39:45.315 [2024-07-22 20:46:57.251659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-07-22 20:46:57.251670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 
00:39:45.316 [2024-07-22 20:46:57.252010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.252021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.252281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.252292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.252676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.252687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.253044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.253054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.253411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.253422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.253765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.253775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.254136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.254146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.254499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.254510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.254819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.254829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.255161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.255172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 
00:39:45.316 [2024-07-22 20:46:57.255571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.255583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.255937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.255949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.256309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.256320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.256686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.256697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.257043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.257054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.257411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.257422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.257769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.257780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.258155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.258166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.258518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.258530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.258885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.258899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 
00:39:45.316 [2024-07-22 20:46:57.259257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.259267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.259630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.259642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.260003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.260013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.260374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.260386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.260764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.316 [2024-07-22 20:46:57.260775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.316 qpair failed and we were unable to recover it. 00:39:45.316 [2024-07-22 20:46:57.261159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.261170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.261524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.261535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.261970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.261981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.262345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.262356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.262739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.262750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 
00:39:45.317 [2024-07-22 20:46:57.263110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.263121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.263294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.263306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.263474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.263488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.263837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.263847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.264023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.264035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.264424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.264436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.264629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.264640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.264958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.264969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.265231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.265241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.265611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.265622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 
00:39:45.317 [2024-07-22 20:46:57.265981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.265993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.266359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.266370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.266726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.266737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.267170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.267181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.267565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.267576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.267953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.267965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.268316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.268327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.268700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.268712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.269015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.269027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 00:39:45.317 [2024-07-22 20:46:57.269383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.317 [2024-07-22 20:46:57.269394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.317 qpair failed and we were unable to recover it. 
00:39:45.318 [2024-07-22 20:46:57.269741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.269751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.270142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.270154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.270505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.270515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.270744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.270755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.271109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.271121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.271471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.271482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.271797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.271808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.272143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.272154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.272524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.272536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 00:39:45.318 [2024-07-22 20:46:57.272895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.318 [2024-07-22 20:46:57.272907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.318 qpair failed and we were unable to recover it. 
00:39:45.318 [2024-07-22 20:46:57.273264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.318 [2024-07-22 20:46:57.273276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.318 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats approximately 210 times, with only the timestamps advancing from 20:46:57.273264 to 20:46:57.349103 (console time 00:39:45.318 through 00:39:45.613) ...]
00:39:45.613 [2024-07-22 20:46:57.349092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.613 [2024-07-22 20:46:57.349103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.613 qpair failed and we were unable to recover it.
00:39:45.613 [2024-07-22 20:46:57.349522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.613 [2024-07-22 20:46:57.349533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.613 qpair failed and we were unable to recover it. 00:39:45.613 [2024-07-22 20:46:57.349956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.613 [2024-07-22 20:46:57.349967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.613 qpair failed and we were unable to recover it. 00:39:45.613 [2024-07-22 20:46:57.350300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.613 [2024-07-22 20:46:57.350311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.613 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.350695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.350706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.350926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.350938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.351377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.351391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.351572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.351583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.351974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.351985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.352246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.352256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.352622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.352633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 
00:39:45.614 [2024-07-22 20:46:57.352937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.352948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.353283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.353293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.353623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.353634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.353818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.353829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.354077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.354088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.354369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.354379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.354688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.354700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.355051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.355062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.355420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.355431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.355736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.355747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 
00:39:45.614 [2024-07-22 20:46:57.356108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.356118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.356292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.356304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.356541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.356554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.356897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.356907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.357251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.357262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.357628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.357638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.358004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.358014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.358431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.358441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.358825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.358836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.359265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.359285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 
00:39:45.614 [2024-07-22 20:46:57.359645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.359657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.360033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.360043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.360452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.614 [2024-07-22 20:46:57.360464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.614 qpair failed and we were unable to recover it. 00:39:45.614 [2024-07-22 20:46:57.360819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.360830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.361071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.361081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.361460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.361470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.361821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.361832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.362190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.362203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.362476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.362487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.362865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.362876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 
00:39:45.615 [2024-07-22 20:46:57.363283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.363294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.363662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.364023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.364034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.364432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.364444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.364800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.364810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.365068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.365078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.365447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.365459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.365697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.365707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.366052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.366062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.366370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.366382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 
00:39:45.615 [2024-07-22 20:46:57.366719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.366729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.367073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.367083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.367474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.367485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.367843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.367853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.368217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.368227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.368622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.368632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.368992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.369002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.369378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.369388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.369663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.369673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.370026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.370037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 
00:39:45.615 [2024-07-22 20:46:57.370420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.370430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.370775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.370787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.371147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.371158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.371513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.371525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.371930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.371941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.372304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.372314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.372581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.372592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.372966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.372977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.373399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.373410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.373713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.373730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 
00:39:45.615 [2024-07-22 20:46:57.373983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.615 [2024-07-22 20:46:57.373994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.615 qpair failed and we were unable to recover it. 00:39:45.615 [2024-07-22 20:46:57.374357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.374368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.374746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.374757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.375118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.375129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.375400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.375410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.375793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.375803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.376236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.376247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.376691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.376701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.377062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.377074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.377353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.377363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 
00:39:45.616 [2024-07-22 20:46:57.377750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.377761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.378116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.378127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.378474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.378486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.378870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.378882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.379248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.379259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.379615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.379627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.380011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.380022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.380298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.380309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.380693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.380704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.381078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.381089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 
00:39:45.616 [2024-07-22 20:46:57.381493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.381504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.381879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.381891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.382264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.382274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.382648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.382659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.383019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.383468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.383479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.383859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.383870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.384229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.384241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.384513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.384524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.384905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.384915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 
00:39:45.616 [2024-07-22 20:46:57.385282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.385293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.385624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.385636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.385992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.386003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.386371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.386381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.386742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.386752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.387102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.387114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.387296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.387307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.387633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.387644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.616 qpair failed and we were unable to recover it. 00:39:45.616 [2024-07-22 20:46:57.388008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.616 [2024-07-22 20:46:57.388019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.388217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.388230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 
00:39:45.617 [2024-07-22 20:46:57.388600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.388611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.388987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.388998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.389372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.389383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.389756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.389767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.390207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.390219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.390539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.390549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.390925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.390935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.390994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.391005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.391327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.391338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.391583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.391594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 
00:39:45.617 [2024-07-22 20:46:57.391954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.391964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.392313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.392325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.392698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.392709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.393011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.393021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.393400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.393411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.393664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.393675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.394051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.394062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.394505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.394516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.394859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.394870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 00:39:45.617 [2024-07-22 20:46:57.395254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.617 [2024-07-22 20:46:57.395265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.617 qpair failed and we were unable to recover it. 
00:39:45.617 [2024-07-22 20:46:57.395630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.617 [2024-07-22 20:46:57.395641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.617 qpair failed and we were unable to recover it.
00:39:45.617 [the same three-line sequence - posix_sock_create: connect() failed (errno = 111), nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." - repeats for every subsequent connection attempt, with only the timestamps advancing, through 00:39:45.623 [2024-07-22 20:46:57.470500]]
00:39:45.623 [2024-07-22 20:46:57.470839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.470850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.471212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.471224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.471584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.471595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.471987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.471998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.472194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.472208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.472584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.472594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.472948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.472958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.473415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.473450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.473839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.473852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.474214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.474226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 
00:39:45.623 [2024-07-22 20:46:57.474549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.474560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.474914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.474926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.475266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.475277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.475635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.475647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.476001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.476011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.476371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.476382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.476735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.476746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.476971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.476982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.477366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.623 [2024-07-22 20:46:57.477377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.623 qpair failed and we were unable to recover it. 00:39:45.623 [2024-07-22 20:46:57.477735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.477746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 
00:39:45.624 [2024-07-22 20:46:57.478124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.478135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.478498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.478508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.478867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.478878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.479239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.479251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.479456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.479468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.479841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.479852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.480207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.480219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.480492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.480503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.480860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.480871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.481230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.481242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 
00:39:45.624 [2024-07-22 20:46:57.481616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.481626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.481973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.481983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.482360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.482372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.482720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.482731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.483084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.483094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.483473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.483484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.483864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.483876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.484241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.484252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.484624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.484636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.484995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.485006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 
00:39:45.624 [2024-07-22 20:46:57.485389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.485400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.485760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.485770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.486129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.486140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.486502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.486513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.486894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.486907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.487258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.487269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.487618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.487628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.487979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.487989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.488358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.488369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.488724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.488740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 
00:39:45.624 [2024-07-22 20:46:57.489098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.489110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.489464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.489475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.489844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.489855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.490217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.490228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.490589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.490599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.490909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.490920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.624 [2024-07-22 20:46:57.491290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.624 [2024-07-22 20:46:57.491301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.624 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.491658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.491670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.491871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.491881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.492252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.492263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 
00:39:45.625 [2024-07-22 20:46:57.492641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.492653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.493004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.493014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.493367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.493379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.493742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.493753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.494149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.494160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.494532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.494543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.494945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.494957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.495318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.495328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.495543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.495553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.495806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.495817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 
00:39:45.625 [2024-07-22 20:46:57.496205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.496215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.496572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.496583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.496963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.496974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.497398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.497409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.497755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.497766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.498076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.498086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.498418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.498432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.498789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.498799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.499147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.499159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.499509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.499520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 
00:39:45.625 [2024-07-22 20:46:57.499896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.499906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.500265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.500275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.500625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.500635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.500853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.500862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.501228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.501239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.501576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.501586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.501946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.501957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.502324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.502334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.502687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.502698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.503057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.503083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 
00:39:45.625 [2024-07-22 20:46:57.503425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.503437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.503799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.503809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.504190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.504206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.504579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.504590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.625 qpair failed and we were unable to recover it. 00:39:45.625 [2024-07-22 20:46:57.504947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.625 [2024-07-22 20:46:57.504958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.505314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.505325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.505699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.505709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.506056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.506067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.506415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.506426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.506824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.506835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 
00:39:45.626 [2024-07-22 20:46:57.507168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.507179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.507530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.507541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.507897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.507908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.508268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.508278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.508732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.508742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.509090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.509100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.509477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.509488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.509844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.509854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.510232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.510243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.510604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.510615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 
00:39:45.626 [2024-07-22 20:46:57.510972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.510983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.511347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.511357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.511708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.511720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.512080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.512094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.512471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.512482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.512855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.512866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.513238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.513251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.513610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.513622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.513981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.513992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.514367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.514378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 
00:39:45.626 [2024-07-22 20:46:57.514747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.514758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.515111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.515123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.515498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.515509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.515730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.515741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.516123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.516135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.516496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.516508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.516855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.516866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.517217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.517229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.517602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.517613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 00:39:45.626 [2024-07-22 20:46:57.517968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.626 [2024-07-22 20:46:57.517979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.626 qpair failed and we were unable to recover it. 
00:39:45.626 [2024-07-22 20:46:57.518411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:45.626 [2024-07-22 20:46:57.518423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 
00:39:45.626 qpair failed and we were unable to recover it. 
[... the same three-line sequence repeats roughly 200 more times through 00:39:45.633 (target timestamps between 20:46:57.518 and 20:46:57.595), identical except for the timestamps: connect() failed with errno = 111 in posix_sock_create, then the sock connection error for tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:39:45.633 [2024-07-22 20:46:57.595060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.595071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.595421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.595432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.595800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.595812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.596214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.596224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.596500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.596511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.596884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.596894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.597232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.597245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.597582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.597593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.597970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.597981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.598331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.598344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 
00:39:45.633 [2024-07-22 20:46:57.598716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.598727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.599083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.599095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.599457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.599469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.599818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.599830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.600187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.600197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.600463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.600474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.600831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.600845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.633 qpair failed and we were unable to recover it. 00:39:45.633 [2024-07-22 20:46:57.601208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.633 [2024-07-22 20:46:57.601220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.601551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.601562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.601917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.601928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 
00:39:45.634 [2024-07-22 20:46:57.602302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.602313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.602661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.602672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.603030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.603041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.603261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.603273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.603661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.603672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.604029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.604039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.604396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.604407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.604763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.604775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.605123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.605134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.605463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.605478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 
00:39:45.634 [2024-07-22 20:46:57.605837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.605848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.606109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.606119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.606296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.606316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.606493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.606503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.606822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.606834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.607195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.607210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.607528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.607539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.607904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.607915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.608314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.608325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.608719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.608729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 
00:39:45.634 [2024-07-22 20:46:57.609104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.609115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.609324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.609335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.609635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.609645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.610019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.610029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.610408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.610419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.610776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.610788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.611147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.611158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.611505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.611517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.611870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.611881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.612234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.612246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 
00:39:45.634 [2024-07-22 20:46:57.612621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.612632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.612985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.612997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.613358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.613369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.634 [2024-07-22 20:46:57.613730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.634 [2024-07-22 20:46:57.613742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.634 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.614098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.614110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.614491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.614502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.614698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.614711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.615127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.615138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.615482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.615492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.615782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.615793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 
00:39:45.635 [2024-07-22 20:46:57.616168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.616179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.616526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.616538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.616894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.616905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.635 [2024-07-22 20:46:57.617263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.635 [2024-07-22 20:46:57.617274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.635 qpair failed and we were unable to recover it. 00:39:45.907 [2024-07-22 20:46:57.617668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.907 [2024-07-22 20:46:57.617680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.907 qpair failed and we were unable to recover it. 00:39:45.907 [2024-07-22 20:46:57.618033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.907 [2024-07-22 20:46:57.618044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.907 qpair failed and we were unable to recover it. 00:39:45.907 [2024-07-22 20:46:57.618394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.907 [2024-07-22 20:46:57.618406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.907 qpair failed and we were unable to recover it. 00:39:45.907 [2024-07-22 20:46:57.618757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.907 [2024-07-22 20:46:57.618768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.907 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.619145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.619155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.619504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.619516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 
00:39:45.908 [2024-07-22 20:46:57.619875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.619886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.620240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.620253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.620475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.620487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.620892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.620904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.621267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.621277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.621635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.621646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.621984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.621995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.622350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.622361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.622715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.622726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.623131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.623143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 
00:39:45.908 [2024-07-22 20:46:57.623511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.623523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.623750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.623762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.624116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.624128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.624528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.624540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.624919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.624931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.625292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.625303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.625612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.625623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.625928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.625938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.626320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.626331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.626753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.626764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 
00:39:45.908 [2024-07-22 20:46:57.627119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.627129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.627480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.627491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.627866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.627878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.628234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.628248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.628627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.628639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.629053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.629064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.629510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.629523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.629715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.629727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.630057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.630068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.630291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.630303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 
00:39:45.908 [2024-07-22 20:46:57.630679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.630689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.631063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.631074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.631514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.631525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.908 [2024-07-22 20:46:57.631873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.908 [2024-07-22 20:46:57.631884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.908 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.632234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.632245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.632608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.632619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.633055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.633066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.633410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.633421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.633796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.633806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.634161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.634172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 
00:39:45.909 [2024-07-22 20:46:57.634525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.634536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.634729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.634740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.635074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.635084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.635424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.635436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.635790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.635801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.636103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.636114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.636401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.636412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.636578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.636589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.636919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.636930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.637285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.637296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 
00:39:45.909 [2024-07-22 20:46:57.637566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.637576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.637773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.637784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.638118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.638128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.638513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.638524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.638898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.638909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.639263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.639282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.639638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.639649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.640005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.640016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.640392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.640403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 00:39:45.909 [2024-07-22 20:46:57.640760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.909 [2024-07-22 20:46:57.640771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.909 qpair failed and we were unable to recover it. 
00:39:45.909 [2024-07-22 20:46:57.641075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.909 [2024-07-22 20:46:57.641086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.909 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for timestamps 2024-07-22 20:46:57.641440 through 20:46:57.716497, console time 00:39:45.909-00:39:45.916 ...]
00:39:45.916 [2024-07-22 20:46:57.716843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.716854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.717208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.717218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.717554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.717565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.717758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.717769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.718095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.718106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.718294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.718305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.718552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.718562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.718785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.718795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.719171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.719182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.719540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.719555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 
00:39:45.916 [2024-07-22 20:46:57.719914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.719925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.720240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.720251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.720591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.720602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.720874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.720885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.720961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.720971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.721311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.721322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.721687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.721698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.722070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.722081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.722459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.722471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.722782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.722792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 
00:39:45.916 [2024-07-22 20:46:57.723009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.723019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.723389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.723399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.723752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.723763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.724152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.724166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.724366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.724376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.724741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.724752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.725111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.725122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.725406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.725417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.725772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.725783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.726161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.726173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 
00:39:45.916 [2024-07-22 20:46:57.726529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.726541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.726905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.916 [2024-07-22 20:46:57.726918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.916 qpair failed and we were unable to recover it. 00:39:45.916 [2024-07-22 20:46:57.727270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.727281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.727623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.727634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.727988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.727998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.728357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.728368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.728614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.728624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.729007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.729018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.729238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.729249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.729588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.729599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 
00:39:45.917 [2024-07-22 20:46:57.729956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.729967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.730349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.730360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.730714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.730725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.731101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.731113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.731482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.731868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.731878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.732235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.732245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.732610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.732624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.732978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.732988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.733312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.733324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 
00:39:45.917 [2024-07-22 20:46:57.733699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.733710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.734063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.734074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.734421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.734433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.734692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.734703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.735056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.735066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.735336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.735346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.735706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.735717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.736093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.736103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.736450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.736461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.736816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.736827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 
00:39:45.917 [2024-07-22 20:46:57.737181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.737191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.737577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.737589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.737852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.917 [2024-07-22 20:46:57.737863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.917 qpair failed and we were unable to recover it. 00:39:45.917 [2024-07-22 20:46:57.738102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.738115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.738460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.738470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.738690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.738700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.738895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.738907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.739224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.739234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.739608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.739619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.740028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.740039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 
00:39:45.918 [2024-07-22 20:46:57.740388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.740399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.740818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.740829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.741182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.741194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.741554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.741569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.741939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.741951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.742304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.742315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.742643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.742654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.743034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.743044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.743395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.743405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.743760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.743772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 
00:39:45.918 [2024-07-22 20:46:57.743991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.744002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.744375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.744386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.744743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.744755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.745108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.745118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.745535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.745546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.745922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.745933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.746335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.746346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.746694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.746704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.746953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.746964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.747347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.747358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 
00:39:45.918 [2024-07-22 20:46:57.747793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.747803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.748151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.748162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.748549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.748559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.748779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.748790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.749193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.749207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.749546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.749556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.749903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.749914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.750286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.750297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.750676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.750686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 00:39:45.918 [2024-07-22 20:46:57.751030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.918 [2024-07-22 20:46:57.751042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.918 qpair failed and we were unable to recover it. 
00:39:45.918 [2024-07-22 20:46:57.751400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.751411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.751823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.751834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.752182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.752193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.752570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.752583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.752937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.752947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.753427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.753461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.753827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.753841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.754280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.754291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.754618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.754629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.755011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.755021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 
00:39:45.919 [2024-07-22 20:46:57.755388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.755399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.755759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.755770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.756126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.756138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.756484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.756495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.756851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.756862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.757082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.757093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.757461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.757472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.757886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.757898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.758259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.758270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.758632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.758644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 
00:39:45.919 [2024-07-22 20:46:57.758998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.759009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.759393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.759404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.759599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.759610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.759985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.759996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.760354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.760365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.760729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.760741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.761095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.761105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.761368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.761379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.761734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.761744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 00:39:45.919 [2024-07-22 20:46:57.762136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.919 [2024-07-22 20:46:57.762147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.919 qpair failed and we were unable to recover it. 
00:39:45.919 [2024-07-22 20:46:57.762509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.919 [2024-07-22 20:46:57.762521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.919 qpair failed and we were unable to recover it.
00:39:45.919 - 00:39:45.925 [2024-07-22 20:46:57.762878 - 20:46:57.838792] the same error pair (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420) repeats for every connect retry in this window, and each attempt ends with "qpair failed and we were unable to recover it."
00:39:45.925 [2024-07-22 20:46:57.839155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.839165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.839555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.839566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.839925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.839936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.840369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.840379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.840800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.840811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.841033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.841044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.841282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.841300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.841545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.925 [2024-07-22 20:46:57.841556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.925 qpair failed and we were unable to recover it. 00:39:45.925 [2024-07-22 20:46:57.841760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.841772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.842108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.842119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 
00:39:45.926 [2024-07-22 20:46:57.842483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.842495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.842850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.842861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.843212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.843462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.843472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.843710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.843721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.844063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.844074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.844465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.844476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.844696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.844706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.845050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.845062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.845421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.845431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 
00:39:45.926 [2024-07-22 20:46:57.845652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.845662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.846049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.846059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.846423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.846435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.846846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.846856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.847082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.847092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.847340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.847352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.847610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.847621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.847939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.847950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.848303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.848315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.848713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.848724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 
00:39:45.926 [2024-07-22 20:46:57.849080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.849094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.849325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.849336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.849702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.849714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.850059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.850071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.850430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.850441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.850801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.850813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.851168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.851179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.851557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.851567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.851921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.851933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.852289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.852300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 
00:39:45.926 [2024-07-22 20:46:57.852672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.852683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.853063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.853073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.853433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.853445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.853694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.926 [2024-07-22 20:46:57.853704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.926 qpair failed and we were unable to recover it. 00:39:45.926 [2024-07-22 20:46:57.854053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.854065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.854445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.854457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.854813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.854823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.855172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.855183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.855537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.855548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.855927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.855938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 
00:39:45.927 [2024-07-22 20:46:57.856294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.856305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.856556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.856566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.856913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.856924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.857127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.857140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.857344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.857355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.857698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.857708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.858060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.858071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.858211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.858223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.858561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.858573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.858798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.858808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 
00:39:45.927 [2024-07-22 20:46:57.859134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.859145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.859409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.859420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.859636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.859646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.859854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.859865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.860213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.860224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.860611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.860622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.860978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.860990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.861351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.861362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.861711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.861722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.861917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.861929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 
00:39:45.927 [2024-07-22 20:46:57.862173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.862184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.862405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.862419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.862664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.862674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.863101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.863112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.863476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.863487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.863841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.927 [2024-07-22 20:46:57.863852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.927 qpair failed and we were unable to recover it. 00:39:45.927 [2024-07-22 20:46:57.864265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.864276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.864655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.864666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.865020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.865031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.865385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.865396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 
00:39:45.928 [2024-07-22 20:46:57.865639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.865649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.865981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.865991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.866252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.866263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.866614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.866624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.866986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.866997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.867383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.867393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.867761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.867772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.868127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.868138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.868496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.868507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.868725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.868735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 
00:39:45.928 [2024-07-22 20:46:57.869085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.869099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.869464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.869475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.869826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.869836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.870184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.870195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.870553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.870565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.870922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.870932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.871247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.871258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.871595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.871606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.871998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.872009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.872371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.872382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 
00:39:45.928 [2024-07-22 20:46:57.872749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.872760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.873134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.873145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.873574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.873585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.873936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.873947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.874305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.874316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.874505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.874516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.874875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.874886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.875242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.875253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.875657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.875668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.875889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.875899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 
00:39:45.928 [2024-07-22 20:46:57.876256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.876269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.876631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.876643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.928 [2024-07-22 20:46:57.876998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.928 [2024-07-22 20:46:57.877008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.928 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.877391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.877403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.877759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.877769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.878125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.878136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.878496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.878508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.878883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.878897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.879251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.879262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.879619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.879632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 
00:39:45.929 [2024-07-22 20:46:57.879989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.880000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.880373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.880385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.880617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.880627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.880982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.880993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.881393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.881404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.881827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.881838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.882188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.882210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.882579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.882590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.882961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.882973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 00:39:45.929 [2024-07-22 20:46:57.883348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.929 [2024-07-22 20:46:57.883358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:45.929 qpair failed and we were unable to recover it. 
00:39:45.929 [2024-07-22 20:46:57.883623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:45.929 [2024-07-22 20:46:57.883633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:45.929 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously, with only the timestamps advancing from 20:46:57.883994 to 20:46:57.956473 and the elapsed-time prefix moving from 00:39:45.929 to 00:39:46.208 ...]
00:39:46.208 [2024-07-22 20:46:57.956818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:46.208 [2024-07-22 20:46:57.956829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:46.208 qpair failed and we were unable to recover it.
00:39:46.208 [2024-07-22 20:46:57.957185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.208 [2024-07-22 20:46:57.957196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.208 qpair failed and we were unable to recover it. 00:39:46.208 [2024-07-22 20:46:57.957573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.957585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.957969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.957979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.958206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.958216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.958548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.958559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.958910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.958921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.959181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.959191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.959527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.959538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.959894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.959905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.960214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.960225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 
00:39:46.209 [2024-07-22 20:46:57.960657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.960670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.961026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.961038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.961433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.961443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.961780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.961790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.962174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.962185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.962531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.962543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.962898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.962909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.963259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.963270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.963654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.963665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.964022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.964033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 
00:39:46.209 [2024-07-22 20:46:57.964398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.964409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.964766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.964777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.965153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.965163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.965508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.965520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.965880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.965890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.966248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.966259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.966614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.966625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.966983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.966994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.967354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.967365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.967727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.967738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 
00:39:46.209 [2024-07-22 20:46:57.968113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.968123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.968536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.968547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.968772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.968783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.969140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.969154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.969508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.969520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.969880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.969891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.970245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.970256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.970631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.209 [2024-07-22 20:46:57.970642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.209 qpair failed and we were unable to recover it. 00:39:46.209 [2024-07-22 20:46:57.971021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.971032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.971387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.971399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 
00:39:46.210 [2024-07-22 20:46:57.971710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.971720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.972080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.972092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.972478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.972488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.972846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.972857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.973211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.973223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.973574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.973585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.973961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.973971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.974287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.974298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.974721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.974732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.974938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.974948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 
00:39:46.210 [2024-07-22 20:46:57.975316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.975331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.975543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.975553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.975923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.975934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.976290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.976302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.976675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.976685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.977036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.977046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.977354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.977366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.977727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.977737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.978114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.978125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.978490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.978501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 
00:39:46.210 [2024-07-22 20:46:57.978860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.978871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.979232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.979244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.979627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.979638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.980004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.980014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.980388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.980399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.980756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.980767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.981147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.981158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.981525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.981536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.210 qpair failed and we were unable to recover it. 00:39:46.210 [2024-07-22 20:46:57.981888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.210 [2024-07-22 20:46:57.981899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.982249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.982259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 
00:39:46.211 [2024-07-22 20:46:57.982573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.982584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.982938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.982948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.983210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.983220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.983574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.983585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.983963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.983974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.984364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.984376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.984738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.984749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.985104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.985115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.985456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.985467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.985821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.985832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 
00:39:46.211 [2024-07-22 20:46:57.986185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.986196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.986571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.986582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.986900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.986911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.987268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.987279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.987635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.987647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.987948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.987958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.988333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.988344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.988702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.988713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.989068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.989078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.989462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.989473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 
00:39:46.211 [2024-07-22 20:46:57.989880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.989892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.990151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.990162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.990511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.990522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.990873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.990884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.991078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.991089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.991460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.991470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.991826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.991838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.992204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.992219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.992460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.992471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.992690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.992701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 
00:39:46.211 [2024-07-22 20:46:57.993058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.993069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.993419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.993430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.993770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.993781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.994145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.994155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.994510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.994521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.211 qpair failed and we were unable to recover it. 00:39:46.211 [2024-07-22 20:46:57.994878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.211 [2024-07-22 20:46:57.994890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.995269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.995280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.995641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.995652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.996010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.996021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.996483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.996494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 
00:39:46.212 [2024-07-22 20:46:57.996684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.996695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.997104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.997115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.997488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.997500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.997855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.997867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.998245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.998257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.998635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.998646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.999073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.999085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.999433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.999444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:57.999823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:57.999834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.000188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.000199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 
00:39:46.212 [2024-07-22 20:46:58.000457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.000468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.000817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.000827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.001212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.001227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.001502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.001513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.001867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.001877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.002225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.002236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.002597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.002608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.002963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.002974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.003327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.003338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 00:39:46.212 [2024-07-22 20:46:58.003707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.003717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it. 
00:39:46.212 [2024-07-22 20:46:58.004092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.212 [2024-07-22 20:46:58.004104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.212 qpair failed and we were unable to recover it.
00:39:46.214 [... the same three-line pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 20:46:58.004484 through 20:46:58.034839 ...]
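errno 111 on Linux is ECONNREFUSED: connect() returns it when nothing is accepting connections at the destination, here the NVMe/TCP listener at 10.0.0.2:4420 that the host side keeps trying to reach. The standalone sketch below is not SPDK code; only the address and port are taken from the log, and it simply reproduces the same single-shot failure:

    /* Minimal reproduction of the errno 111 (ECONNREFUSED) failure above:
     * a plain TCP connect() to a port with no listener fails immediately.
     * Not SPDK code; the address and port are copied from the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no target listening this prints errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }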
00:39:46.215 [... reconnect-failure triplets (connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeat from 20:46:58.035195 through 20:46:58.037429 ...]
00:39:46.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3902367 Killed "${NVMF_APP[@]}" "$@"
00:39:46.215 [... the reconnect-failure triplet repeats again at 20:46:58.037861 and 20:46:58.038219 ...]
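The bash notice above reports that the previous nvmf_tgt instance (PID 3902367) received SIGKILL, which is what the stream of connection refusals reflects: nothing is listening on the target side until a new instance comes up. The loop below is a generic, hedged sketch of bounded reconnection, not SPDK's actual qpair recovery logic; the address and port come from the log, while the interval and attempt limit are illustrative assumptions:

    /* Generic sketch of a bounded reconnect loop (not SPDK's qpair recovery
     * code): keep probing the target address until connect() succeeds or
     * the retry budget runs out. */
    #include <arpa/inet.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static bool try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return false;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 300; attempt++) {   /* ~30 s budget, illustrative */
            if (try_connect("10.0.0.2", 4420)) {
                printf("reconnected after %d attempt(s)\n", attempt);
                return 0;
            }
            usleep(100 * 1000);   /* 100 ms between attempts */
        }
        fprintf(stderr, "gave up: target never came back\n");
        return 1;
    }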
00:39:46.215 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:46.215 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:46.215 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:39:46.215 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:39:46.215 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:46.215 [... reconnect-failure triplets (connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) are interleaved with the trace lines above and continue from 20:46:58.038703 through 20:46:58.044999 ...]
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3903398
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3903398
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3903398 ']'
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:46.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:39:46.216 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:46.216 [... reconnect-failure triplets (connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) are interleaved with the trace lines above and continue from 20:46:58.045367 through 20:46:58.050192 ...]
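At this point the harness has launched a fresh nvmf_tgt (PID 3903398) inside the cvl_0_0_ns_spdk network namespace and, via waitforlisten, is polling until the application is up on its RPC socket /var/tmp/spdk.sock, with max_retries=100 per the trace. The real helper is a shell function in SPDK's common scripts; the sketch below is only a rough standalone illustration that reuses the socket path and retry count from the log and assumes a 1 s poll interval:

    /* Hedged sketch of a waitforlisten-style probe: poll the application's
     * UNIX-domain RPC socket until connect() succeeds or the retry budget
     * is exhausted. Path and retry count come from the trace above. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        const char *rpc_addr = "/var/tmp/spdk.sock";
        const int max_retries = 100;

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, rpc_addr, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("target is up and listening on %s\n", rpc_addr);
                close(fd);
                return 0;
            }
            close(fd);
            sleep(1);   /* poll interval is an assumption */
        }
        fprintf(stderr, "target never started listening on %s\n", rpc_addr);
        return 1;
    }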
00:39:46.216 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. triplet keeps repeating from 20:46:58.050561 through 20:46:58.075459 ...]
00:39:46.218 [2024-07-22 20:46:58.075813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.075824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.076171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.076181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.076645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.076656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.077012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.077023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.077381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.077394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.077755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.077766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.078180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.078190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.078543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.078554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.078907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.078919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.079356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.079367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 
00:39:46.218 [2024-07-22 20:46:58.079712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.079723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.080079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.080090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.080530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.080541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.080971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.080982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.081210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.081224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.081598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.081609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.081958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.081969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.082429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.082463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.082713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.082726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.218 [2024-07-22 20:46:58.083109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.083119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 
00:39:46.218 [2024-07-22 20:46:58.083477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.218 [2024-07-22 20:46:58.083491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.218 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.083887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.083901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.084290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.084302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.084486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.084497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.084837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.084848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.085228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.085239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.085607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.085619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.085881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.085892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.086086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.086098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.086425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.086436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 
00:39:46.219 [2024-07-22 20:46:58.086802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.086812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.087002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.087015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.087253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.087265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.087584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.087595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.087959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.087970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.088216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.088227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.088601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.088612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.088967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.088978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.089348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.089360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.089728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.089739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 
00:39:46.219 [2024-07-22 20:46:58.090103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.090115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.090475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.090487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.090841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.090851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.091057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.091067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.091415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.091426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.091794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.091805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.092179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.092190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.092556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.092568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.092938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.092950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.093299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.093311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 
00:39:46.219 [2024-07-22 20:46:58.093677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.093688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.094048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.094060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.094416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.219 [2024-07-22 20:46:58.094428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.219 qpair failed and we were unable to recover it. 00:39:46.219 [2024-07-22 20:46:58.094785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.094796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.095141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.095152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.095581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.095592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.095964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.095975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.096276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.096287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.096624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.096635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.096988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.096999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 
00:39:46.220 [2024-07-22 20:46:58.097358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.097370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.097740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.097750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.098128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.098139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.098362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.098373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.098765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.098776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.099195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.099210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.099569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.099580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.099871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.099881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.100235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.100247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.100607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.100617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 
00:39:46.220 [2024-07-22 20:46:58.100868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.100879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.101240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.101253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.101633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.101644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.102003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.102014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.102355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.102367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.102698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.102708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.103019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.103029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.103382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.103393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.103772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.103787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.104135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.104146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 
00:39:46.220 [2024-07-22 20:46:58.104512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.104523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.104870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.104881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.105256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.105266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.105624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.105635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.105880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.105890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.105986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.105999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.106343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.106354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.106721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.106731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.220 [2024-07-22 20:46:58.107094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.220 [2024-07-22 20:46:58.107105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.220 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.107300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.107310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 
00:39:46.221 [2024-07-22 20:46:58.107656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.107667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.108016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.108027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.108386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.108397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.108642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.108652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.109010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.109020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.109376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.109387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.109584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.109595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.109949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.109961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.110337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.110348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.110655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.110665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 
00:39:46.221 [2024-07-22 20:46:58.111019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.111030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.111394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.111404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.111789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.111800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.112162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.112173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.112544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.112555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.112961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.112972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.113216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.113229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.113565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.113576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.113931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.113942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.114322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.114333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 
00:39:46.221 [2024-07-22 20:46:58.114728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.114739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.115111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.115124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.115485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.115497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.115843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.115855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.116223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.116234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.116596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.116607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.116962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.116973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.117323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.117335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.117693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.117704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.118061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.118071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 
00:39:46.221 [2024-07-22 20:46:58.118317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.118328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.118678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.118689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.119065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.119076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.119416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.119427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.119781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.119791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.120148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.221 [2024-07-22 20:46:58.120159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.221 qpair failed and we were unable to recover it. 00:39:46.221 [2024-07-22 20:46:58.120513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.120525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.120925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.120936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.121291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.121302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.121572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.121583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 
00:39:46.222 [2024-07-22 20:46:58.121924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.121935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.122291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.122302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.122680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.122691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.123045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.123055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.123428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.123439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.123638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.123649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.124025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.124036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.124393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.124404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.124780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.124790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.124984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.124995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 
00:39:46.222 [2024-07-22 20:46:58.125408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.125419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.125767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.125777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.125969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.125980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.126322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.126334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.126705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.126716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.127075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.127087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.127435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.127446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.127676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.127687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.128070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.128082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.128387] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:39:46.222 [2024-07-22 20:46:58.128431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.128448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.128483] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:46.222 [2024-07-22 20:46:58.128648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.128660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.129017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.129027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.129381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.129392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.129749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.129761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.129976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.129988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.130352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.130364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.130739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.130750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.131106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.131118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 
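The repeated errors above come from connect() attempts to the NVMe/TCP listener at 10.0.0.2:4420 being refused with errno = 111 (ECONNREFUSED) while the target side is still initializing (see the SPDK/DPDK startup banner interleaved with them). The following is a minimal, purely illustrative C sketch of that failure mode, not SPDK's posix_sock_create; the address and port simply mirror the log.

    /* Illustrative only: shows how connect() to a host/port with no listener
     * yields errno = 111 (ECONNREFUSED), the error repeated in the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port used in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With nothing listening yet, this prints errno = 111 on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }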
00:39:46.222 [2024-07-22 20:46:58.131474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.131485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.131841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.131853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.132076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.132087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.132465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.132477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.132853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.222 [2024-07-22 20:46:58.132864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.222 qpair failed and we were unable to recover it. 00:39:46.222 [2024-07-22 20:46:58.133216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.133230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.133576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.133587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.133946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.133957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.134306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.134317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.134656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.134667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 
00:39:46.223 [2024-07-22 20:46:58.135016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.135027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.135390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.135401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.135782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.135793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.136149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.136160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.136536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.136548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.136809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.136820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.137170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.137180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.137476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.137488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.137852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.137863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.138227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.138239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 
00:39:46.223 [2024-07-22 20:46:58.138593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.138604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.138964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.138976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.139339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.139351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.139705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.139717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.140108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.140119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.140477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.140489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.140843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.140855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.141244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.141256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.141615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.141626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.141974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.141986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 
00:39:46.223 [2024-07-22 20:46:58.142342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.142354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.142711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.142722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.143108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.143121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.143482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.143494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.143808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.143820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.144178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.144190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.144546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.144558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.144935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.144946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.145303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.145314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.145686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.145696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 
00:39:46.223 [2024-07-22 20:46:58.146054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.146065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.146423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.223 [2024-07-22 20:46:58.146435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.223 qpair failed and we were unable to recover it. 00:39:46.223 [2024-07-22 20:46:58.146792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.146802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.147115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.147127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.147478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.147490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.147845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.147858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.148084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.148095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.148440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.148454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.148796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.148807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.149155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.149166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 
00:39:46.224 [2024-07-22 20:46:58.149514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.149526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.149875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.149886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.150084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.150097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.150438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.150450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.150808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.150820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.151172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.151187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.151550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.151562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.151920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.151931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.152291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.152302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.152673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.152683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 
00:39:46.224 [2024-07-22 20:46:58.153090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.153102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.153451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.153463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.153810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.153820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.154268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.154279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.154631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.154642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.155003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.155014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.155371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.155382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.155745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.155756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.156134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.156144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.156501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.156513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 
00:39:46.224 [2024-07-22 20:46:58.156896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.156907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.157263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.157275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.157671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.157683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.158038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.224 [2024-07-22 20:46:58.158050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.224 qpair failed and we were unable to recover it. 00:39:46.224 [2024-07-22 20:46:58.158405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.158415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.158776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.158787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.159196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.159211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.159432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.159443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.159800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.159811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.160167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 
00:39:46.225 [2024-07-22 20:46:58.160591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.160602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.160959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.160969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.161324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.161336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.161692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.161703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.162076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.162088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.162468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.162481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.162840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.162852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.163072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.163083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.163434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.163445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.163891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.163902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 
00:39:46.225 [2024-07-22 20:46:58.164241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.164253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.164632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.164644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.164993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.165005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.165385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.165396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.165598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.165610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.165934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.165945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.166356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.166367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.166719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.166730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.167068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.167079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.167536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.167547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 
00:39:46.225 [2024-07-22 20:46:58.167925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.167936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.168292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.168303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.168501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.168512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.168880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.168890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.169276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.169286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.169643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.169654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.170009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.170020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.170378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.170389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.170740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.170751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.171105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.171116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 
00:39:46.225 [2024-07-22 20:46:58.171488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.225 [2024-07-22 20:46:58.171499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.225 qpair failed and we were unable to recover it. 00:39:46.225 [2024-07-22 20:46:58.171865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.171876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.172260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.172273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.172631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.172642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.173002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.173012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.173220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.173231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.173592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.173603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.173957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.173968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.174229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.174240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.174604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.174618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 
00:39:46.226 [2024-07-22 20:46:58.174997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.175008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.175368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.175379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.175740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.175751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.176106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.176116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.176473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.176485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.176839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.176851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.177217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.177229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.177610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.177621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.178007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.178017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.178380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.178391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 
00:39:46.226 [2024-07-22 20:46:58.178750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.178763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.179126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.179136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.179490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.179502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.179859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.179870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.180270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.180281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.180638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.180649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.181027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.181037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.181385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.181396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.181774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.181784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.182143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 
00:39:46.226 [2024-07-22 20:46:58.182549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.182560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.182915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.182927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.183282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.183294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.183649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.183660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.184000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.184011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.184366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.184377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.184770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.184781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.184991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.226 [2024-07-22 20:46:58.185001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.226 qpair failed and we were unable to recover it. 00:39:46.226 [2024-07-22 20:46:58.185363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.185374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.185745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.185756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 
00:39:46.227 [2024-07-22 20:46:58.186110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.186122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.186482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.186494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.186736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.186749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.187124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.187134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.187477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.187488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.187845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.187855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.188232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.188243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.188600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.188610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.188971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.188982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.189341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.189353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 
00:39:46.227 [2024-07-22 20:46:58.189710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.189722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.190079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.190090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.190523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.190535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.190898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.190910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.191162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.191173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.191540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.191552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.191989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.192000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.192363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.192374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.192739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.192750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.193103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.193113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 
00:39:46.227 [2024-07-22 20:46:58.193488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.193499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.193864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.193874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.194265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.194276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.194707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.194718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.195073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.195084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.195464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.195488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.195854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.195865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.196221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.196233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.196596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.196607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 EAL: No free 2048 kB hugepages reported on node 1 00:39:46.227 [2024-07-22 20:46:58.196961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.196973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 
00:39:46.227 [2024-07-22 20:46:58.197347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.197358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.197720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.197731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.197975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.197986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.198346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.198360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.198740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.198751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.227 qpair failed and we were unable to recover it. 00:39:46.227 [2024-07-22 20:46:58.199113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.227 [2024-07-22 20:46:58.199124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.199483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.199493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.199750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.199760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.200106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.200118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.200449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.200461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 
00:39:46.228 [2024-07-22 20:46:58.200811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.200822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.201207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.201218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.201544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.201554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.201909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.201919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.202273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.202285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.202721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.202732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.202975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.202985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.203342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.203352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.203749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.203760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.204110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.204120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 
00:39:46.228 [2024-07-22 20:46:58.204437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.204447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.204799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.204810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.205159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.205169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.205527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.205538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.205913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.205923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.206321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.206333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.206722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.206733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.207091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.207102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.207468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.207479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.207749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.207759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 
00:39:46.228 [2024-07-22 20:46:58.208015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.208026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.208252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.208262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.208463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.208473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.208809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.208820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.209176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.209186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.209451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.209462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.209848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.209859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.210214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.210225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.210588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.210598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.210962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.210972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 
00:39:46.228 [2024-07-22 20:46:58.211354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.211364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.211678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.228 [2024-07-22 20:46:58.211688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.228 qpair failed and we were unable to recover it. 00:39:46.228 [2024-07-22 20:46:58.211956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.211966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.212190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.212203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.212576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.212587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.212861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.212872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.213242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.213253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.213632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.213642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.214026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.214037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 00:39:46.229 [2024-07-22 20:46:58.214442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.229 [2024-07-22 20:46:58.214453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.229 qpair failed and we were unable to recover it. 
00:39:46.502 [2024-07-22 20:46:58.214805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.214816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.215173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.215183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.215548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.215559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.215874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.216083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.216093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.216384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.216395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.216751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.216770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.217125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.217136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.217508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.217519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.217872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.217883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 
00:39:46.502 [2024-07-22 20:46:58.218268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.218279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.218647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.218658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.218830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.218842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.219093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.219104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.219423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.219435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.219792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.219804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.220174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.220184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.502 [2024-07-22 20:46:58.220545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.502 [2024-07-22 20:46:58.220562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.502 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.220939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.220951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.221308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.221319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 
00:39:46.503 [2024-07-22 20:46:58.221687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.221698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.222060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.222071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.222427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.222437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.222807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.222817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.223180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.223191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.223454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.223465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.223849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.223860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.224226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.224237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.224584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.224596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.224955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.224966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 
00:39:46.503 [2024-07-22 20:46:58.225308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.225319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.225546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.225557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.225826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.225837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.226207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.226219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.226450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.226460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.226874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.226885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.227242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.227253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.227631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.227645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.228032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.228043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.228409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.228419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 
00:39:46.503 [2024-07-22 20:46:58.228782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.228793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.229154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.229164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.229577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.229589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.229936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.229946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.230304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.230314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.230664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.230674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.231004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.231014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.231377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.231388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.231846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.231856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.503 [2024-07-22 20:46:58.232287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.232298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 
00:39:46.503 [2024-07-22 20:46:58.232690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.503 [2024-07-22 20:46:58.232700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.503 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.233061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.233072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.233421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.233431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.233794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.233805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.234227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.234238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.234595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.234607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.234808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.234820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.235227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.235237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.235468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.235479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.235842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.235852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 
00:39:46.504 [2024-07-22 20:46:58.236211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.236223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.236575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.236585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.236959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.236969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.237325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.237336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.237715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.237726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.238085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.238096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.238499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.238509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.238918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.238928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.239278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.239289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.239666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.239676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 
00:39:46.504 [2024-07-22 20:46:58.240052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.240063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.240421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.240433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.240702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.240712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.241121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.241132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.241570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.241581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.241916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.241927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.242283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.242294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.242568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.242578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.242918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.242928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.243285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.243296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 
00:39:46.504 [2024-07-22 20:46:58.243720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.243732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.244083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.244097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.244478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.244490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.244840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.244850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.245208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.245219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.245582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.245593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.245963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.504 [2024-07-22 20:46:58.245974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.504 qpair failed and we were unable to recover it. 00:39:46.504 [2024-07-22 20:46:58.246177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.246189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.246551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.246561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.246924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.246935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 
00:39:46.505 [2024-07-22 20:46:58.247320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.247330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.247703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.247714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.247896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.247907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.248324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.248335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.248655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.248666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.249022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.249034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.249390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.249401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.249760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.249771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.249965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.249977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.250346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.250356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 
00:39:46.505 [2024-07-22 20:46:58.250718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.250729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.251087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.251097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.251461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.251471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.251829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.251840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.252042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.252051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.252417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.252428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.252729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.252740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.252911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.252921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.253268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.253278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.253658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.253668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 
00:39:46.505 [2024-07-22 20:46:58.253985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.253996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.254351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.254361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.254721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.254731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.255084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.255095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.255407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.255418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.255773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.255783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.256141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.256152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.256519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.256530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.256890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.256900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.505 [2024-07-22 20:46:58.257249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.257260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 
00:39:46.505 [2024-07-22 20:46:58.257628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.505 [2024-07-22 20:46:58.257638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.505 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.257995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.258006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.258228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.258240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.258607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.258618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.258980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.258991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.259350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.259361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.259713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.259723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.260064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.260074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.260419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.260430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.260786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.260798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 
00:39:46.506 [2024-07-22 20:46:58.261152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.261163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.261412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.261423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.261777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.261787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.262142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.262154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.262494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.262505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.262842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.262853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.263213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.263224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.263592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.263602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.263963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.263973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.264352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.264363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 
00:39:46.506 [2024-07-22 20:46:58.264729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.264739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.264961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.264972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.265348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.265359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.265584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.265594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.265943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.265953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.266176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.266189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.266426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.266437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.266800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.266811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.267112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.267123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.267377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.267388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 
00:39:46.506 [2024-07-22 20:46:58.267767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.267777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.267983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.267994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.268355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.268366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.268731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.268742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.269101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.269381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.269392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.269627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.269637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.506 [2024-07-22 20:46:58.269990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.506 [2024-07-22 20:46:58.270000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.506 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.270350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.270360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.270714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.270726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 
00:39:46.507 [2024-07-22 20:46:58.271128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.271139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.271492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.271503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.271724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.271735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.272106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.272117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.272474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.272485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.272685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.272695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.273069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.273080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.273457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.273467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.273827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.273837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.274187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.274199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 
00:39:46.507 [2024-07-22 20:46:58.274586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.274596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.274975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.274986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.275344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.275356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.275706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.275717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.275937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.275947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.275939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:46.507 [2024-07-22 20:46:58.276329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.276341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.276760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.276771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.277121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.277132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.277358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.277369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 
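Editor's note: interleaved with the connection errors, the target process logs "app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4", i.e. the SPDK event framework is only now coming up on a 4-core reactor set while the initiator is already retrying, which is consistent with the refused connections above. A hedged sketch of the kind of startup that emits that notice (API names are from SPDK's public event framework as I understand it; the exact spdk_app_opts_init() signature has varied across releases, so treat this as illustrative rather than the test's actual code):

    /* Hedged sketch of an SPDK event-framework startup that logs
     * "Total cores available: N"; not the code used by this test. */
    #include "spdk/event.h"
    #include "spdk/log.h"

    static void
    app_start_cb(void *arg)
    {
        /* Application-specific setup (transports, subsystems, ...) would
         * happen here; this sketch just shuts the app down again. */
        SPDK_NOTICELOG("application started\n");
        spdk_app_stop(0);
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};

        /* Recent SPDK releases take the struct size as a second argument. */
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "nvmf_example";

        int rc = spdk_app_start(&opts, app_start_cb, NULL);  /* prints the core-count notice */
        spdk_app_fini();
        return rc;
    }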
00:39:46.507 [2024-07-22 20:46:58.277715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.277725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.278087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.278098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.278475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.278487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.278838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.278848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.279194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.279209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.507 [2024-07-22 20:46:58.279556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.507 [2024-07-22 20:46:58.279566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.507 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.279924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.279935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.280168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.280179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.280542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.280554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.280776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.280786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 
00:39:46.508 [2024-07-22 20:46:58.281144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.281155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.281512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.281523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.281899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.281911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.282346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.282358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.282709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.282720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.283080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.283091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.283435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.283446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.283800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.283811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.284160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.284171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.284559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.284571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 
00:39:46.508 [2024-07-22 20:46:58.284951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.284961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.285181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.285191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.285574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.285585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.285992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.286002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.286474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.286510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.286897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.286911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.287271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.287283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.287482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.287495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.287861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.287871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.288236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.288247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 
00:39:46.508 [2024-07-22 20:46:58.288478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.288494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.288857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.288868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.289226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.289238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.289608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.289618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.289992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.290003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.290387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.290401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.290772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.290784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.291146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.291156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.291448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.291458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.291807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.291816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 
00:39:46.508 [2024-07-22 20:46:58.292153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.292163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.292411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.508 [2024-07-22 20:46:58.292421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.508 qpair failed and we were unable to recover it. 00:39:46.508 [2024-07-22 20:46:58.292786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.292796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.293007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.293017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.293243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.293255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.293630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.293640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.293982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.293991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.294212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.294222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.294584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.294594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.294958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.294967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 
00:39:46.509 [2024-07-22 20:46:58.295300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.295319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.295683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.295693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.296031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.296040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.296382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.296392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.296762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.296772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.297100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.297109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.297303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.297313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 00:39:46.509 [2024-07-22 20:46:58.297512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.297521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 
00:39:46.509 [2024-07-22 20:46:58.297732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000388680 is same with the state(5) to be set 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Read completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 Write completed with error (sct=0, sc=8) 00:39:46.509 starting I/O failed 00:39:46.509 [2024-07-22 20:46:58.299028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:46.509 [2024-07-22 20:46:58.299597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.509 [2024-07-22 20:46:58.299704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038fe80 with addr=10.0.0.2, port=4420 00:39:46.509 qpair failed and we were unable to recover it. 
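Editor's note: the burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" followed by "CQ transport error -6 (No such device or address) on qpair id 1" shows in-flight I/O being completed with error status once the TCP connection drops; -6 is -ENXIO, returned here by spdk_nvme_qpair_process_completions() when the transport can no longer service the queue pair. A hedged sketch of how a host-side completion callback typically inspects the sct/sc fields printed above (the callback name and io_ctx_t struct are made up for illustration; the spdk_nvme_cpl accessors are SPDK's public API as I understand it):

    /* Hedged sketch of an NVMe I/O completion callback that reports the
     * status code type (sct) and status code (sc) seen in the log. */
    #include "spdk/nvme.h"
    #include <stdio.h>

    typedef struct { const char *name; } io_ctx_t;    /* hypothetical per-I/O context */

    static void
    io_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        io_ctx_t *ctx = arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            /* The aborted commands in the log report sct=0, sc=8. */
            fprintf(stderr, "%s completed with error (sct=%d, sc=%d)\n",
                    ctx->name, cpl->status.sct, cpl->status.sc);
            return;
        }
        printf("%s completed successfully\n", ctx->name);
    }

A negative return from spdk_nvme_qpair_process_completions(), as logged here, is how the transport signals that the qpair itself has failed rather than any single command; the subsequent records show the initiator falling back to reconnect attempts against a new tqpair address.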
00:39:46.509 [2024-07-22 20:46:58.300115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:46.509 [2024-07-22 20:46:58.300165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500038fe80 with addr=10.0.0.2, port=4420
00:39:46.509 qpair failed and we were unable to recover it.
00:39:46.509 [2024-07-22 20:46:58.300530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:46.509 [2024-07-22 20:46:58.300563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:46.509 qpair failed and we were unable to recover it.
00:39:46.509 [... the same three-line error sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for every connection attempt between 20:46:58.300940 and 20:46:58.375866, console timestamps 00:39:46.509 through 00:39:46.517 ...]
00:39:46.517 [2024-07-22 20:46:58.376226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.517 [2024-07-22 20:46:58.376236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.517 qpair failed and we were unable to recover it. 00:39:46.517 [2024-07-22 20:46:58.376574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.517 [2024-07-22 20:46:58.376583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.517 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.376928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.376937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.377373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.377383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.377575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.377584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.377938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.377947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.378280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.378291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.378514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.378524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.378986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.378995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.379351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.379367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 
00:39:46.518 [2024-07-22 20:46:58.379764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.379773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.380109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.380119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.380478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.380488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.380847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.380858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.381233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.381242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.381579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.381588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.381922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.381931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.382307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.382317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.382654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.382664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.383023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.383038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 
00:39:46.518 [2024-07-22 20:46:58.383408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.383418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.383793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.383803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.384026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.384037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.384414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.384424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.384773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.384782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.384968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.384977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.385337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.385347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.385802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.518 [2024-07-22 20:46:58.385811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.518 qpair failed and we were unable to recover it. 00:39:46.518 [2024-07-22 20:46:58.386134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.386144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.386338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.386348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 
00:39:46.519 [2024-07-22 20:46:58.386738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.386747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.387082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.387091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.387439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.387449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.387652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.387661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.388045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.388054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.388414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.388423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.388782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.388792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.389149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.389158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.389501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.389511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.389869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.389879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 
00:39:46.519 [2024-07-22 20:46:58.390243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.390253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.390618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.390627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.390967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.390976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.391176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.391186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.391558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.391567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.391900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.391910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.392263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.392273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.392608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.392617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.519 [2024-07-22 20:46:58.392981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.519 [2024-07-22 20:46:58.392990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.519 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.393139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.393148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 
00:39:46.520 [2024-07-22 20:46:58.393549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.393559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.393914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.393923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.394292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.394302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.394679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.394690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.394875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.394885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.395157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.395166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.395549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.395559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.395919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.395928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.396262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.396273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.396639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.396648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 
00:39:46.520 [2024-07-22 20:46:58.396990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.396999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.397366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.397376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.397741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.397750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.397943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.397952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.398323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.398332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.398670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.398679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.399042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.399051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.399448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.399458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.399788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.399798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.400157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.400167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 
00:39:46.520 [2024-07-22 20:46:58.400500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.400510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.400887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.400897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.401260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.401270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.401642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.401651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.402008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.520 [2024-07-22 20:46:58.402017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.520 qpair failed and we were unable to recover it. 00:39:46.520 [2024-07-22 20:46:58.402378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.402388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.402748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.402758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.403136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.403144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.403502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.403511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.403733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.403743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 
00:39:46.521 [2024-07-22 20:46:58.404082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.404091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.404509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.404520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.404898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.404908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.405090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.405105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.405370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.405380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.405643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.405653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.405995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.406005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.406370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.406380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.406743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.406752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.406998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.407007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 
00:39:46.521 [2024-07-22 20:46:58.407281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.407291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.407663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.407672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.408044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.408054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.408516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.408526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.408863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.408872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.409111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.409122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.409365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.409375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.409565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.409576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.409844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.409853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.410158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.410168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 
00:39:46.521 [2024-07-22 20:46:58.410534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.521 [2024-07-22 20:46:58.410544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.521 qpair failed and we were unable to recover it. 00:39:46.521 [2024-07-22 20:46:58.410854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.410863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.411224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.411234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.411561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.411572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.411837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.411847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.412206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.412215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.412590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.412600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.412973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.412982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.413329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.413339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.413702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.413711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 
00:39:46.522 [2024-07-22 20:46:58.414076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.414086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.414436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.414446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.414802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.414811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.415175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.415184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.415577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.415586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.415923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.415933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.416315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.416325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.416710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.416720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.417120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.417129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.417478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.417488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 
00:39:46.522 [2024-07-22 20:46:58.417845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.417857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.418004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.418013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.418418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.418427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.418640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.418649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.418994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.419004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.419349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.419359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.419734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.419743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.522 [2024-07-22 20:46:58.419939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.522 [2024-07-22 20:46:58.419948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.522 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.420353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.420362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.420595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.420604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 
00:39:46.523 [2024-07-22 20:46:58.420969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.420978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.421320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.421330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.421541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.421550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.421746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.421755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.422156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.422165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.422429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.422439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.422666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.422675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.423065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.423077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.423428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.423438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.423690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.423699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 
00:39:46.523 [2024-07-22 20:46:58.424060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.424070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.424438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.424447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.424881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.424891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.425119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.425129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.425542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.425552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.425621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.425630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.425932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.425942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.426283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.426297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.426666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.426675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.427058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.427067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 
00:39:46.523 [2024-07-22 20:46:58.427458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.427468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.427830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.427840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.428225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.428234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.523 [2024-07-22 20:46:58.428632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.523 [2024-07-22 20:46:58.428641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.523 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.429014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.429023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.429229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.429239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.429436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.429451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.429619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.429629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.429821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.429830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.430073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.430082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 
00:39:46.524 [2024-07-22 20:46:58.430512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.430523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.430910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.430920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.431307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.431317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.431688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.431698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.432064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.432073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.432476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.432486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.432852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.432869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.433232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.433241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.433592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.433601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.433781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.433790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 
00:39:46.524 [2024-07-22 20:46:58.434152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.434161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.434552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.434562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.434786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.434796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.435162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.435172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.435556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.435566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.435899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.435908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.436245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.436255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.436481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.436490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.436863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.436872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 00:39:46.524 [2024-07-22 20:46:58.437241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.524 [2024-07-22 20:46:58.437251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.524 qpair failed and we were unable to recover it. 
00:39:46.524 [2024-07-22 20:46:58.437620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.437630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.437993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.438002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.438342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.438352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.438727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.438736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.439073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.439082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.439470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.439480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.439693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.439703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.440067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.440078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.440433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.440443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.440780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.440789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 
00:39:46.525 [2024-07-22 20:46:58.441159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.441168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.441532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.441541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.441729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.441739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.442107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.442117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.442465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.442475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.442853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.442862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.443199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.443219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.443601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.443611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.443803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.443813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.444194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.444207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 
00:39:46.525 [2024-07-22 20:46:58.444563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.444575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.444909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.444918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.445251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.445261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.445633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.445642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.445948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.445958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.446222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.525 [2024-07-22 20:46:58.446232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.525 qpair failed and we were unable to recover it. 00:39:46.525 [2024-07-22 20:46:58.446565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.446574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.446908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.446918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.447277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.447287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.447651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.447667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 
00:39:46.526 [2024-07-22 20:46:58.448024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.448034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.448390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.448404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.448759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.448768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.448959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.448969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.449216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.449225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.449491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.449501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.449855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.449864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.450197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.450209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.450452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.450461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.450758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.450767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 
00:39:46.526 [2024-07-22 20:46:58.451133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.451142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.451568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.451578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.451924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.451933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.452272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.452282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.452742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.452751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.453101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.526 [2024-07-22 20:46:58.453110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.526 qpair failed and we were unable to recover it. 00:39:46.526 [2024-07-22 20:46:58.453500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.453509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.453876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.453886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.454101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.454111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.454497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.454507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 
00:39:46.527 [2024-07-22 20:46:58.454872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.454882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.455222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.455232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.455601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.455610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.455999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.456009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.456461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.456470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.456807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.456817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.457191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.457204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.457537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.457546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.457882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.457892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.458257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.458267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 
00:39:46.527 [2024-07-22 20:46:58.458651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.458663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.459000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.459010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.459373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.459383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.459742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.459752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.460122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.460131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.460274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.460283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.460298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:46.527 [2024-07-22 20:46:58.460336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:46.527 [2024-07-22 20:46:58.460348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:46.527 [2024-07-22 20:46:58.460358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:46.527 [2024-07-22 20:46:58.460368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:46.527 [2024-07-22 20:46:58.460532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.460542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.460578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:39:46.527 [2024-07-22 20:46:58.460849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.460860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 
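The app_setup_trace notices a few lines above are the target application's own debugging hint for a run like this: tracing is enabled with tracepoint group mask 0xFFFF, a snapshot of events can be captured from the running app, or the raw trace file in /dev/shm can be copied out for offline analysis. A sketch of that workflow, using only the commands and paths the log itself prints (the /tmp destination is a placeholder, not from the log):

  # capture a live snapshot of nvmf tracepoints from shared-memory instance 0
  spdk_trace -s nvmf -i 0
  # or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0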
00:39:46.527 [2024-07-22 20:46:58.460829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:39:46.527 [2024-07-22 20:46:58.461305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.461254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:39:46.527 [2024-07-22 20:46:58.461318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.527 qpair failed and we were unable to recover it. 00:39:46.527 [2024-07-22 20:46:58.461389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:46.527 [2024-07-22 20:46:58.461690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.527 [2024-07-22 20:46:58.461700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.462063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.462075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.462418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.462429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.462689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.462698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.462892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.462901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.463274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.463284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.463640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.463649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.464016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.464026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 
00:39:46.528 [2024-07-22 20:46:58.464252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.464262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.464538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.464547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.464913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.464923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.465268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.465278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.465670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.465680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.465945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.465955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.466308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.466317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.466751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.466761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.467120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.467132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.467369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.467379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 
00:39:46.528 [2024-07-22 20:46:58.467768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.467778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.467979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.467988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.468319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.468329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.468632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.468642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.468878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.468888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.469103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.469112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.469458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.469468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.469771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.469781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.528 [2024-07-22 20:46:58.470119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.528 [2024-07-22 20:46:58.470133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.528 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.470488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.470498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 
00:39:46.529 [2024-07-22 20:46:58.470860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.470870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.471088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.471098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.471280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.471290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.471581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.471590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.471920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.471930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.472297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.472307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.472551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.472560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.472768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.472777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.473113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.473122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.473391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.473401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 
00:39:46.529 [2024-07-22 20:46:58.473784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.473793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.474175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.474184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.474441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.474450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.474703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.474714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.475067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.475076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.475138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.475148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.475550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.475560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.475793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.475802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.476164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.476174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.476543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.476553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 
00:39:46.529 [2024-07-22 20:46:58.476901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.476910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.477268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.477277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.477643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.477652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.477998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.478007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.478250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.529 [2024-07-22 20:46:58.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.529 qpair failed and we were unable to recover it. 00:39:46.529 [2024-07-22 20:46:58.478723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.478733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.479078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.479087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.479430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.479440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.479803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.479813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.480207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.480216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 
00:39:46.530 [2024-07-22 20:46:58.480652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.480662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.481025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.481035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.481394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.481403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.481785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.481794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.482194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.482210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.482457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.482466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.482811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.482821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.483082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.483091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.483470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.483480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.483839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.483849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 
00:39:46.530 [2024-07-22 20:46:58.484209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.484218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.484564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.484573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.484990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.485000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.485343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.485353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.485752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.485762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.485976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.485985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.486359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.486370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.486775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.486784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.487148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.487157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.487524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.487543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 
00:39:46.530 [2024-07-22 20:46:58.487742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.530 [2024-07-22 20:46:58.487752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.530 qpair failed and we were unable to recover it. 00:39:46.530 [2024-07-22 20:46:58.487952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.487962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.488313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.488322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.488714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.488726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.489117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.489126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.489428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.489438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.489799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.489808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.490203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.490212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.490475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.490485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.490812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.490821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 
00:39:46.531 [2024-07-22 20:46:58.491210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.491220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.491594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.491604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.491964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.491977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.492216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.492225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.492584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.492593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.492948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.492957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.493363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.493374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.493735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.493744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.494120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.494129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.494510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.494520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 
00:39:46.531 [2024-07-22 20:46:58.494880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.494889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.495235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.495245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.495586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.495595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.495818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.495828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.496205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.496215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.496349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.496358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.496712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.496721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.531 qpair failed and we were unable to recover it. 00:39:46.531 [2024-07-22 20:46:58.497068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.531 [2024-07-22 20:46:58.497077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.497423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.497433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.497795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.497804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 
00:39:46.532 [2024-07-22 20:46:58.498018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.498028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.498396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.498406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.498835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.498845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.499015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.499024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.499277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.499287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.499541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.499550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.499941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.499950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.500384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.500393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.500755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.500765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.500968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.500978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 
00:39:46.532 [2024-07-22 20:46:58.501343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.501353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.501710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.501720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.502067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.502077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.502435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.502447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.502766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.502776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.503148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.503158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.503238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.503258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.503588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.503598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.503935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.503944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.504305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.504315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 
00:39:46.532 [2024-07-22 20:46:58.504679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.504688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.504983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.504992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.505360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.505370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.532 qpair failed and we were unable to recover it. 00:39:46.532 [2024-07-22 20:46:58.505738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.532 [2024-07-22 20:46:58.505748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.506109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.506118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.506466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.506477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.506680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.506689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.506915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.506925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.507131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.507140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.507512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.507522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 
00:39:46.533 [2024-07-22 20:46:58.507865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.507876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.508084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.508093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.508326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.508336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.508728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.508737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.508940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.508950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.509028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.509039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.509439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.509449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.509647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.509657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.510049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.510058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.533 [2024-07-22 20:46:58.510404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.510414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 
00:39:46.533 [2024-07-22 20:46:58.510601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.533 [2024-07-22 20:46:58.510612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.533 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.510835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.511216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.511228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.511605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.511615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.511962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.511972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.512314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.512325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.512526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.512539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.512851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.512861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.513070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.513079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.513408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.513419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 
00:39:46.808 [2024-07-22 20:46:58.513865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.513875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.514230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.514241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.514587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.514597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.514851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.514860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.515215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.515226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.515621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.515631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.516015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.516025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.516276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.516286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.516669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.516679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.516930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.516941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 
00:39:46.808 [2024-07-22 20:46:58.517326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.517337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.517687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.517698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.517907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.517916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.518164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.518536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.518546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.518891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.518901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.519251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.808 [2024-07-22 20:46:58.519261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.808 qpair failed and we were unable to recover it. 00:39:46.808 [2024-07-22 20:46:58.519525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.519535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.519812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.519822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.520037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.520048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 
00:39:46.809 [2024-07-22 20:46:58.520432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.520442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.520790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.520800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.521163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.521173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.521403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.521413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.521764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.521775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.522230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.522242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.522554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.522564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.522911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.522921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.523290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.523299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.523645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.523655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 
00:39:46.809 [2024-07-22 20:46:58.523834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.523846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.524189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.524202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.524462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.524472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.524533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.524543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.524904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.524914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.525276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.525286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.525738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.525748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.526009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.526019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.526215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.526225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.526425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.526435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 
00:39:46.809 [2024-07-22 20:46:58.526685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.526695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.526967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.526976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.527174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.527184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.527609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.527619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.527808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.527822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.528206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.528216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.528480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.528490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.528908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.528917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.529276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.529286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.529669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.529678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 
00:39:46.809 [2024-07-22 20:46:58.530045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.530055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.530466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.530475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.530698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.530708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.531125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.531135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.531494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.809 [2024-07-22 20:46:58.531504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.809 qpair failed and we were unable to recover it. 00:39:46.809 [2024-07-22 20:46:58.531888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.810 [2024-07-22 20:46:58.531899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.810 qpair failed and we were unable to recover it. 00:39:46.810 [2024-07-22 20:46:58.532126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.810 [2024-07-22 20:46:58.532136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.810 qpair failed and we were unable to recover it. 00:39:46.810 [2024-07-22 20:46:58.532501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.810 [2024-07-22 20:46:58.532512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.810 qpair failed and we were unable to recover it. 00:39:46.810 [2024-07-22 20:46:58.532892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.810 [2024-07-22 20:46:58.532901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.810 qpair failed and we were unable to recover it. 00:39:46.810 [2024-07-22 20:46:58.533241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.810 [2024-07-22 20:46:58.533252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.810 qpair failed and we were unable to recover it. 
00:39:46.810 [2024-07-22 20:46:58.533464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:46.810 [2024-07-22 20:46:58.533477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:46.810 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 20:46:58.533 through 20:46:58.601: posix_sock_create reports connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:39:46.815 [2024-07-22 20:46:58.601192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:46.815 [2024-07-22 20:46:58.601204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:39:46.815 qpair failed and we were unable to recover it.
00:39:46.815 [2024-07-22 20:46:58.601493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.601504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.815 qpair failed and we were unable to recover it. 00:39:46.815 [2024-07-22 20:46:58.601852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.601861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.815 qpair failed and we were unable to recover it. 00:39:46.815 [2024-07-22 20:46:58.602027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.602036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.815 qpair failed and we were unable to recover it. 00:39:46.815 [2024-07-22 20:46:58.602462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.602472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.815 qpair failed and we were unable to recover it. 00:39:46.815 [2024-07-22 20:46:58.602687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.602697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.815 qpair failed and we were unable to recover it. 00:39:46.815 [2024-07-22 20:46:58.602967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.815 [2024-07-22 20:46:58.602976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.603205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.603214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.603461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.603471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.603855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.603864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.604214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.604224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 
00:39:46.816 [2024-07-22 20:46:58.604572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.604581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.604949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.604958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.605342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.605352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.605681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.605690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.606052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.606062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.606464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.606473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.606813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.606822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.607184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.607194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.607562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.607572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.607811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.607820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 
00:39:46.816 [2024-07-22 20:46:58.608041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.608051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.608397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.608406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.608750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.608759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.609124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.609133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.609490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.609501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.609865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.609874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.609943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.609951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.610332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.610342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.610730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.610740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.611104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.611113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 
00:39:46.816 [2024-07-22 20:46:58.611406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.611415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.611487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.611496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.611813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.611822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.612045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.612054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.612274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.612284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.612674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.612683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.613039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.613049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.613264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.613274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.613621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.613631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.613956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.613965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 
00:39:46.816 [2024-07-22 20:46:58.614360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.614372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.614701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.614711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.615073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.615083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.816 qpair failed and we were unable to recover it. 00:39:46.816 [2024-07-22 20:46:58.615327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.816 [2024-07-22 20:46:58.615336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.615556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.615566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.616013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.616022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.616219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.616233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.616632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.616641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.616985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.616995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.617221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.617232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 
00:39:46.817 [2024-07-22 20:46:58.617592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.617602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.617938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.617948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.618313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.618323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.618559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.618568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.618937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.618946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.619298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.619307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.619677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.619686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.619918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.619928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.620300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.620309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.620530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.620540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 
00:39:46.817 [2024-07-22 20:46:58.620913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.620922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.621380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.621389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.621779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.621789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.622015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.622024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.622372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.622382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.622488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.622498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.622831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.622841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.623178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.623187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.623504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.623514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.623876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.623885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 
00:39:46.817 [2024-07-22 20:46:58.624231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.624242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.624640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.624650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.625021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.625030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.625406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.625416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.625797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.625806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.817 qpair failed and we were unable to recover it. 00:39:46.817 [2024-07-22 20:46:58.626230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.817 [2024-07-22 20:46:58.626240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.626452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.626461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.626650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.626661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.626981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.626990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.627327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.627337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 
00:39:46.818 [2024-07-22 20:46:58.627700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.627711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.628101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.628110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.628175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.628184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.628590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.628599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.628982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.628991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.629357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.629367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.629730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.629740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.629942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.629952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.630325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.630336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.630723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.630733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 
00:39:46.818 [2024-07-22 20:46:58.630966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.630975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.631343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.631352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.631710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.631720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.632108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.632118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.632452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.632462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.632655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.632665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.632951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.632961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.633133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.633143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.633252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.633261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.633632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.633642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 
00:39:46.818 [2024-07-22 20:46:58.633994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.634004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.634180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.634190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.634554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.634565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.634758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.634768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.635144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.635154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.635357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.635369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.635697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.635708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.635774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.635784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.636169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.636179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.636526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.636537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 
00:39:46.818 [2024-07-22 20:46:58.636784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.636812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.637035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.637045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.637293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.818 [2024-07-22 20:46:58.637303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.818 qpair failed and we were unable to recover it. 00:39:46.818 [2024-07-22 20:46:58.637544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.637553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.638011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.638020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.638360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.638370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.638747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.638757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.639192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.639204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.639571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.639580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.639943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.639952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 
00:39:46.819 [2024-07-22 20:46:58.640364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.640375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.640641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.640650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.641017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.641026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.641368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.641378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.641740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.641749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.642086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.642096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.642464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.642474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.642815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.642824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.643177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.643187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.643547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.643557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 
00:39:46.819 [2024-07-22 20:46:58.643766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.643775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.644162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.644171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.644616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.644625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.644962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.644972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.645360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.645370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.645598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.645607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.645858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.645868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.646106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.646115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.646517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.646527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.646870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.646879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 
00:39:46.819 [2024-07-22 20:46:58.647242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.647252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.647470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.647479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.647855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.647864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.648232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.648241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.648492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.648502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.648853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.648863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.649277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.649287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.649642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.649651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.650050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.650060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.650443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.650452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 
00:39:46.819 [2024-07-22 20:46:58.650750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.819 [2024-07-22 20:46:58.650760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.819 qpair failed and we were unable to recover it. 00:39:46.819 [2024-07-22 20:46:58.651123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.651133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.651556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.651565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.651915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.651924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.652130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.652140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.652396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.652406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.652822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.652832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.653033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.653043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.653409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.653418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.653769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.653779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 
00:39:46.820 [2024-07-22 20:46:58.654147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.654159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.654546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.654555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.654810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.654820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.655166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.655176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.655568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.655578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.655749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.655758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.656017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.656027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.656261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.656270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.656627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.656636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.656836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.656847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 
00:39:46.820 [2024-07-22 20:46:58.657183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.657193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.657578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.657588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.658002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.658011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.658195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.658209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.658419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.658433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.658800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.658809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.658992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.659002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.659307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.659317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.659680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.659689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.660041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.660051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 
00:39:46.820 [2024-07-22 20:46:58.660424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.660433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.660633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.660643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.660881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.660890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.661265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.661275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.661658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.661667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.661878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.661887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.662128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.662137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.662399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.662408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.662799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.820 [2024-07-22 20:46:58.662808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.820 qpair failed and we were unable to recover it. 00:39:46.820 [2024-07-22 20:46:58.663066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.663075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 
00:39:46.821 [2024-07-22 20:46:58.663297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.663306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.663492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.663501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.663934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.663943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.664212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.664222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.664421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.664430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.664825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.664835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.665239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.665248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.665435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.665444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.665794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.665803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.666173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.666182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 
00:39:46.821 [2024-07-22 20:46:58.666534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.666547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.666906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.666915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.667295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.667304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.667695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.667705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.667953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.667962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.668336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.668345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.668720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.668730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.669123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.669132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.669497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.669507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.669735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.669744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 
00:39:46.821 [2024-07-22 20:46:58.670090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.670099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.670303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.670312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.670751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.670760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.671138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.671147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.671387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.671397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.671769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.671778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.672004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.672014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.672290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.672299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.672642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.672651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.673039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.673049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 
00:39:46.821 [2024-07-22 20:46:58.673421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.673430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.673780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.673789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.674150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.674160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.674524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.674534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.674884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.674893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.675128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.675138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.675503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.821 [2024-07-22 20:46:58.675513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.821 qpair failed and we were unable to recover it. 00:39:46.821 [2024-07-22 20:46:58.675739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.675749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.676115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.676124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.676337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.676346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 
00:39:46.822 [2024-07-22 20:46:58.676711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.676720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.677102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.677112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.677475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.677485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.677851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.677861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.678251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.678261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.678515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.678525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.678926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.678935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.679171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.679180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.679538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.679550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.679912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.679922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 
00:39:46.822 [2024-07-22 20:46:58.680140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.680150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.680365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.680377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.680771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.680782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.681144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.681155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.681397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.681408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.681811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.681821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.682153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.682164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.682506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.682518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.682800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.682810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.683205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.683216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 
00:39:46.822 [2024-07-22 20:46:58.683400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.683410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.683600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.683611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.683986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.683996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.684370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.684381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.684590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.684601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.684870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.684881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.685267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.685278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.685654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.685664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.685891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.685901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.822 qpair failed and we were unable to recover it. 00:39:46.822 [2024-07-22 20:46:58.686289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.822 [2024-07-22 20:46:58.686300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 
00:39:46.823 [2024-07-22 20:46:58.686724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.686734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.687050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.687061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.687209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.687220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.687407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.687418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.687836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.687846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.688226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.688236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.688610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.688620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.688973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.688986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.689260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.689271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.689651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.689662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 
00:39:46.823 [2024-07-22 20:46:58.690007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.690018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.690377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.690388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.690740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.690751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.691114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.691125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.691481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.691492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.691753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.691763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.692167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.692177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.692533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.692544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.692795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.692805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.693163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.693173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 
00:39:46.823 [2024-07-22 20:46:58.693603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.693614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.693840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.693850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.694222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.694233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.694589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.694599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.694951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.694962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.695328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.695339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.695768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.695779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.696002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.696012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.696326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.696337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.696581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.696590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 
00:39:46.823 [2024-07-22 20:46:58.697020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.697031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.697395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.697405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.697789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.697800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.698165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.698176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.698382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.698394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.698723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.698733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.698959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.698970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.823 qpair failed and we were unable to recover it. 00:39:46.823 [2024-07-22 20:46:58.699402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.823 [2024-07-22 20:46:58.699413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.699790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.699801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.700164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.700174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 
00:39:46.824 [2024-07-22 20:46:58.700391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.700402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.700795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.700807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.701214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.701229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.701576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.701587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.701772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.701783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.701998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.702008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.702374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.702385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.702740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.702753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.702849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.702859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.703227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.703241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 
00:39:46.824 [2024-07-22 20:46:58.703637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.703648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.704012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.704022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.704248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.704259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.704526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.704540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.704897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.704907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.705269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.705281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.705673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.705684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.706045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.706056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.706423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.706434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.706823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.706834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 
00:39:46.824 [2024-07-22 20:46:58.707045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.707056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.707253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.707264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.707670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.707680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.707942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.707952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.708312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.708323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.708574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.708584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.708955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.708966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.709179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.709189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.709544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.709556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.709878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.709888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 
00:39:46.824 [2024-07-22 20:46:58.710087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.710098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.710341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.710351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.710728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.710739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.710936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.710946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.711180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.711191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.824 qpair failed and we were unable to recover it. 00:39:46.824 [2024-07-22 20:46:58.711554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.824 [2024-07-22 20:46:58.711564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.711776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.711786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.712156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.712167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.712390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.712401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.712768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.712779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 
00:39:46.825 [2024-07-22 20:46:58.712977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.712987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.713361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.713372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.713721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.713733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.714062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.714074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.714454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.714465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.714682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.714692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.715071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.715082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.715448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.715461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.715820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.715831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.716221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.716232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 
00:39:46.825 [2024-07-22 20:46:58.716614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.716625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.716985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.716996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.717331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.717343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.717704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.717714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.718074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.718085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.718432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.718443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.718613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.718624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.718976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.718987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.719337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.719348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.719718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.719729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 
00:39:46.825 [2024-07-22 20:46:58.720084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.720094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.720341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.720352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.720709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.720721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.721081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.721091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.721318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.721329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.721785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.721796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.722176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.722194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.722416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.722427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.722785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.722796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.723154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.723165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 
00:39:46.825 [2024-07-22 20:46:58.723550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.723560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.723944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.723955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.724323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.724334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.825 qpair failed and we were unable to recover it. 00:39:46.825 [2024-07-22 20:46:58.724696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.825 [2024-07-22 20:46:58.724706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.724906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.724918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.725293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.725304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.725671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.725681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.725898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.725909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.726251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.726263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.726620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.726632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 
00:39:46.826 [2024-07-22 20:46:58.727043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.727054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.727276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.727287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.727643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.727653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.728039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.728051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.728463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.728474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.728838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.728848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.729194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.729209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.729663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.729675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.730029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.730040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.730506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.730545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 
00:39:46.826 [2024-07-22 20:46:58.730921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.730935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.731405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.731440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.731714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.731727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.732096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.732107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.732478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.732490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.732928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.732939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.733294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.733306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.733684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.733694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.734073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.734084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.734539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.734550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 
00:39:46.826 [2024-07-22 20:46:58.734901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.734912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.735300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.735312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.735684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.735695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.736067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.736078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.736434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.736445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.826 qpair failed and we were unable to recover it. 00:39:46.826 [2024-07-22 20:46:58.736664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.826 [2024-07-22 20:46:58.736674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.737048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.737059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.737483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.737495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.737849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.737861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.738222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.738235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 
00:39:46.827 [2024-07-22 20:46:58.738445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.738457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.738642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.738653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.739023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.739033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.739406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.739417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.739644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.739654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.739838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.739849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.740221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.740233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.740624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.740635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.741000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.741011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.741376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.741388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 
00:39:46.827 [2024-07-22 20:46:58.741586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.741597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.741933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.741943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.742325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.742337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.742573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.742585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.742945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.742957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.743343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.743354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.743526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.743537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.743914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.743927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.744138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.744148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.744351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.744363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 
00:39:46.827 [2024-07-22 20:46:58.744683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.744698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.744898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.744908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.745109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.745120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.745361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.745371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.745756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.745766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.746124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.746134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.746496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.746508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.746870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.746881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.747229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.747240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.747440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.747450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 
00:39:46.827 [2024-07-22 20:46:58.747648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.747658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.748031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.748041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.748384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.748396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.827 qpair failed and we were unable to recover it. 00:39:46.827 [2024-07-22 20:46:58.748604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.827 [2024-07-22 20:46:58.748615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.748820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.748830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.749191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.749207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.749591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.749602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.749962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.749973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.750182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.750193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.750569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.750580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 
00:39:46.828 [2024-07-22 20:46:58.750801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.750811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.751170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.751180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.751545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.751556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.751924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.751935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.752297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.752307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.752665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.752676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.752902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.752912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.753114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.753124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.753494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.753505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.753863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.753874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 
00:39:46.828 [2024-07-22 20:46:58.754218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.754230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.754585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.754595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.754807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.754818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.754994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.755003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.755385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.755396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.755607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.755618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.755808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.755820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.756033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.756045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.756474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.756484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.756707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.756717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 
00:39:46.828 [2024-07-22 20:46:58.757105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.757116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.757495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.757506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.757753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.757763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.758012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.758022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.758438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.758448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.758809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.758820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.759170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.759181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.759554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.759565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.759919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.759930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.760142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.760153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 
00:39:46.828 [2024-07-22 20:46:58.760326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.828 [2024-07-22 20:46:58.760337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.828 qpair failed and we were unable to recover it. 00:39:46.828 [2024-07-22 20:46:58.760577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.760591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.760957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.760968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.761328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.761338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.761698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.761709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.761906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.761916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.762299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.762310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.762498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.762507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.762850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.762860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.763223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.763234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 
00:39:46.829 [2024-07-22 20:46:58.763513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.763524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.763912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.763922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.764336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.764347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.764700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.764712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.765102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.765117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.765486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.765497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.765848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.765859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.766242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.766254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.766510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.766520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.766880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.766891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 
00:39:46.829 [2024-07-22 20:46:58.767115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.767125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.767475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.767485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.767845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.767856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.768063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.768073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.768437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.768448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.768793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.768804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.769026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.769037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.769260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.769272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.769642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.769652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.769997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.770008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 
00:39:46.829 [2024-07-22 20:46:58.770282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.770292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.770718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.770729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.771108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.771119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.771565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.771576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.771937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.771948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.772296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.772307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.772676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.772688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.773056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.773067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.773449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.829 [2024-07-22 20:46:58.773460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.829 qpair failed and we were unable to recover it. 00:39:46.829 [2024-07-22 20:46:58.773812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.773824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 
00:39:46.830 [2024-07-22 20:46:58.774047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.774057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.774442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.774454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.774820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.774831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.775052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.775062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.775419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.775429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.775630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.775640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.775996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.776007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.776389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.776399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.776756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.776767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.777127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.777137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 
00:39:46.830 [2024-07-22 20:46:58.777495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.777506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.777798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.777809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.778171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.778182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.778563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.778574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.778932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.778943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.779156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.779167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.779398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.779409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.779763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.779775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.779998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.780009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.780394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.780405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 
00:39:46.830 [2024-07-22 20:46:58.780810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.780821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.781184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.781195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.781574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.781585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.781945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.781956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.782316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.782327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.782719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.782730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.783079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.783090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.783468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.783481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.783862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.783873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.784193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.784206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 
00:39:46.830 [2024-07-22 20:46:58.784437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.784448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.784672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.784682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.785102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.785112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.785334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.785344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.785659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.785669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.786029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.786040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.786399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.830 [2024-07-22 20:46:58.786411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.830 qpair failed and we were unable to recover it. 00:39:46.830 [2024-07-22 20:46:58.786612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.786623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.786956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.786971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.787194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.787209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 
00:39:46.831 [2024-07-22 20:46:58.787545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.787556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.787898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.787909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.788278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.788290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.788696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.788707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.789063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.789074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.789273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.789284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.789672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.789682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.789884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.789894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.790228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.790239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.790546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 
00:39:46.831 [2024-07-22 20:46:58.790920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.790930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.791305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.791316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.791711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.791722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.792080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.792091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.792457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.792469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.792818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.792828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.793223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.793234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.793459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.793470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.793658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.793669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.794032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.794044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 
00:39:46.831 [2024-07-22 20:46:58.794429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.794440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.794826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.794837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.795199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.795213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.795438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.795448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.795830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.795840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.796197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.796215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.796522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.796534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.796903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.796915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.797185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.797196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 00:39:46.831 [2024-07-22 20:46:58.797575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.831 [2024-07-22 20:46:58.797587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.831 qpair failed and we were unable to recover it. 
00:39:46.832 [2024-07-22 20:46:58.797970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.797981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.798185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.798195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.798455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.798466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.798931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.798943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.799442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.799477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.799836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.799849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.799920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.799929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.800254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.800265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.800451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.800461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.800683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.800695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 
00:39:46.832 [2024-07-22 20:46:58.800849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.800859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.801101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.801114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.801512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.801524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.801918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.801931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.802163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.802175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.802540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.802553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.802903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.802915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.803111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.803122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.803532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.803544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.803898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.803913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 
00:39:46.832 [2024-07-22 20:46:58.804303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.804315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.804685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.804696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.804907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.804918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.805242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.805253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.805616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.805627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.806012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.806024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.806468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.806479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.806681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.806691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.807030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.807041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.807266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.807277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 
00:39:46.832 [2024-07-22 20:46:58.807601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.807611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.807837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.807848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.808161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.808177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.808540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.808551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.808858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.808869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.809246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.809257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.832 qpair failed and we were unable to recover it. 00:39:46.832 [2024-07-22 20:46:58.809627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.832 [2024-07-22 20:46:58.809638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.809869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.809882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.810266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.810277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.810676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.810689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 
00:39:46.833 [2024-07-22 20:46:58.810913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.810925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.811287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.811298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.811561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.811572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.811957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.811968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.812182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.812192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.812441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.812452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.812673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.812684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.812865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.812875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.813091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.813102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.813505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.813516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 
00:39:46.833 [2024-07-22 20:46:58.813876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.813887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.814100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.814110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.814463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.814474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.814836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.814847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.815070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.815080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.815520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.815532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:46.833 [2024-07-22 20:46:58.815732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:46.833 [2024-07-22 20:46:58.815744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:46.833 qpair failed and we were unable to recover it. 00:39:47.102 [2024-07-22 20:46:58.816079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.816091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.816290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.816303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.816651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.816663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 
00:39:47.103 [2024-07-22 20:46:58.816858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.816869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.817208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.817220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.817591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.817602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.817854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.817865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.818250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.818262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.818608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.818618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.818979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.818991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.819377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.819387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.819781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.819792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.820169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.820179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 
00:39:47.103 [2024-07-22 20:46:58.820407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.820418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.820788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.820800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.821048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.821059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.821265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.821276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.821619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.821630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.821857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.821868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.822092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.822104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.822305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.822318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.822540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.822551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.822750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.822760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 
00:39:47.103 [2024-07-22 20:46:58.823134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.823145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.823333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.823344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.823596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.823607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.103 [2024-07-22 20:46:58.823991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.103 [2024-07-22 20:46:58.824001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.103 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.824226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.824236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.824576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.824587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.824966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.824978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.825343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.825354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.825729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.825740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.826100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.826111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 
00:39:47.104 [2024-07-22 20:46:58.826479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.826490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.826887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.826898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.827249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.827260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.827497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.827508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.827713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.827723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.828095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.828106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.828334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.828349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.828737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.828748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.829108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.829119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.829362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.829372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 
00:39:47.104 [2024-07-22 20:46:58.829759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.829770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.829839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.829848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.830163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.830174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.830525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.830536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.830921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.830933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.831001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.831011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.831408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.831421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.831850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.831861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.832245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.832256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 00:39:47.104 [2024-07-22 20:46:58.832621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.104 [2024-07-22 20:46:58.832633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.104 qpair failed and we were unable to recover it. 
00:39:47.104 [2024-07-22 20:46:58.832994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.833005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.833230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.833241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.833416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.833427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.833800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.833810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.834131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.834141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.834347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.834358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.834672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.834682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.835066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.835079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.835421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.835433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.835790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.835801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 
00:39:47.105 [2024-07-22 20:46:58.836154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.836165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.836336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.836346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.836741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.836751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.837108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.837120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.837432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.837443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.837804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.837814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.838046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.838056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.838422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.838434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.838709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.838720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.838945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.838955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 
00:39:47.105 [2024-07-22 20:46:58.839209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.839221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.839580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.839590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.839853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.839863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.840212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.840224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.840535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.840546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.840895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.840906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.105 [2024-07-22 20:46:58.841172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.105 [2024-07-22 20:46:58.841183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.105 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.841385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.841397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.841706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.841716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.841907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.841916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 
00:39:47.106 [2024-07-22 20:46:58.842110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.842121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.842294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.842305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.842685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.842697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.843103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.843114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.843533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.843545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.843894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.843906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.844248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.844262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.844619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.844631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.844845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.844855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.845181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.845191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 
00:39:47.106 [2024-07-22 20:46:58.845552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.845563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.845950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.845961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.846327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.846338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.846727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.846738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.847124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.847135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.847502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.847514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.847872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.847883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.848273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.848284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.848727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.848738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.849103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.849115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 
00:39:47.106 [2024-07-22 20:46:58.849341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.849355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.849560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.849570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.106 qpair failed and we were unable to recover it. 00:39:47.106 [2024-07-22 20:46:58.849904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.106 [2024-07-22 20:46:58.849915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.850300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.850310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.850677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.850689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.851047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.851058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.851426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.851436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.851795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.851806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.852168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.852179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.852585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.852597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 
00:39:47.107 [2024-07-22 20:46:58.852802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.852813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.852880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.852890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.853118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.853129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.853322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.853336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.853400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.853410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.853588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.853599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.854020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.854114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.854327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.854408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 
00:39:47.107 [2024-07-22 20:46:58.854694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.854904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.854916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.855245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.855256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.855526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.855536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.855858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.855872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.856237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.856248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.856610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.856621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.856976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.856986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.107 [2024-07-22 20:46:58.857377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.107 [2024-07-22 20:46:58.857388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.107 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.857629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.857639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 
00:39:47.108 [2024-07-22 20:46:58.858003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.858014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.858374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.858385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.858760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.858771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.859135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.859145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.859540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.859551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.859958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.859968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.860171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.860181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.860375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.860386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.860576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.860586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.860800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.860810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 
00:39:47.108 [2024-07-22 20:46:58.861207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.861218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.861422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.861433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.861797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.861809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.862192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.862206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.862422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.862433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.862613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.862623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.862977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.862987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.863401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.863412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.863776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.864111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.864121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 
00:39:47.108 [2024-07-22 20:46:58.864485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.864495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.864696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.864706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.865044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.865054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.865418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.865430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.865756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.108 [2024-07-22 20:46:58.865767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.108 qpair failed and we were unable to recover it. 00:39:47.108 [2024-07-22 20:46:58.866170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.866181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.866397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.866407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.866787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.866797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.867057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.867068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.867247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.867259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 
00:39:47.109 [2024-07-22 20:46:58.867669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.867680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.868068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.868080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.868471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.868481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:47.109 [2024-07-22 20:46:58.868870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.868883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:39:47.109 [2024-07-22 20:46:58.869237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.869253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:47.109 [2024-07-22 20:46:58.869489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.869500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:47.109 [2024-07-22 20:46:58.869722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.869734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.109 [2024-07-22 20:46:58.870068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.870079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 
00:39:47.109 [2024-07-22 20:46:58.870424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.870436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.870798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.870809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.871198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.871212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.109 [2024-07-22 20:46:58.871561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.109 [2024-07-22 20:46:58.871572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.109 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.871933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.871944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.872084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.872442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.872453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.872674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.872687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.873020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.873031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.873437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.873449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 
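The fragments referencing common/autotest_common.sh and nvmf/common.sh interleaved a few entries above (the lines carrying nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2) are bash xtrace output from the test harness rather than driver messages: timing_exit start_nvmf_tgt marks the end of the target-startup phase of this test case, and the (( i == 0 )) / return 0 pair appears to be the tail of one of the harness's wait or retry helpers. That reading is an inference from the trace alone, not from the script source. A minimal sketch of that kind of wait loop follows, purely illustrative and not the actual autotest_common.sh helper; the address and port are copied from the log, while the variable names, the 30-attempt budget, and the use of nc are assumptions made only for the sketch.

# Illustrative sketch only, not autotest_common.sh: poll the NVMe/TCP
# portal seen in the log until something is listening, so that later
# connect() attempts do not fail with "connection refused".
ip=10.0.0.2
port=4420
i=30
while (( i > 0 )); do
    if nc -z "$ip" "$port" 2>/dev/null; then
        break                       # target is accepting connections
    fi
    sleep 1
    (( i-- ))
done
(( i == 0 )) && echo "nothing listening on $ip:$port" >&2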
00:39:47.110 [2024-07-22 20:46:58.873814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.873827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.874030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.874041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.874315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.874327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.874671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.874682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.875036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.875047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.875376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.875388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.875470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.875481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.875761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.875771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.876147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.876157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.876370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.876381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 
00:39:47.110 [2024-07-22 20:46:58.876586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.876597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.876869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.876880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.877322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.877334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.877672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.877683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.878067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.878078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.878273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.878285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.878462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.878473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.878829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.878841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.879043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.879053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 00:39:47.110 [2024-07-22 20:46:58.879318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.879329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.110 qpair failed and we were unable to recover it. 
00:39:47.110 [2024-07-22 20:46:58.879541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.110 [2024-07-22 20:46:58.879553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.879919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.879929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.880281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.880292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.880507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.880518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.880691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.880702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.881096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.881107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.881304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.881315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.881379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.881389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.881755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.881767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.882109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.882120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 
00:39:47.111 [2024-07-22 20:46:58.882317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.882328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.882657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.882667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.883029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.883040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.883401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.883411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.883607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.883618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.883897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.883908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.884179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.884208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.884415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.884427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.884763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.884775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.885024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.885034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 
00:39:47.111 [2024-07-22 20:46:58.885383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.885395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.885766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.885778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.886141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.886153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.886507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.886519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.886744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.886754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.887119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.887130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.887327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.111 [2024-07-22 20:46:58.887338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.111 qpair failed and we were unable to recover it. 00:39:47.111 [2024-07-22 20:46:58.887511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.887521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.887886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.887897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.888090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.888102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 
00:39:47.112 [2024-07-22 20:46:58.888292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.888308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.888534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.888544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.888928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.888940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.889164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.889176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.889374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.889386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.889577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.889589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.889914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.889925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.890275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.890287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.890467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.890479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.890818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.890829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 
00:39:47.112 [2024-07-22 20:46:58.891212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.891224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.891619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.891630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.891998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.892011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.892406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.892418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.892809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.892822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.893190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.893210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.893574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.893585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.893811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.893822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.894207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.894219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.894587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.894599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 
00:39:47.112 [2024-07-22 20:46:58.894996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.895009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.895369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.895382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.895631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.895642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.112 qpair failed and we were unable to recover it. 00:39:47.112 [2024-07-22 20:46:58.895993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.112 [2024-07-22 20:46:58.896004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.896375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.896386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.896748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.896759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.896973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.896984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.897307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.897318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.897679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.897691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.898076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.898086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 
00:39:47.113 [2024-07-22 20:46:58.898306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.898317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.898662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.898673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.899059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.899071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.899298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.899309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.899694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.899705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.900095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.900106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.900317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.900327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.900502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.900513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.900866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.900877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.901090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.901099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 
00:39:47.113 [2024-07-22 20:46:58.901491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.901504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.901571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.901580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.901759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.901770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.902096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.902106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.902464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.902475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.902721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.902731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.902962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.902973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.903339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.903350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.903715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.903726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 00:39:47.113 [2024-07-22 20:46:58.904086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.113 [2024-07-22 20:46:58.904097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.113 qpair failed and we were unable to recover it. 
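For anyone triaging this run: errno = 111 is ECONNREFUSED on Linux, i.e. the host's TCP connect() to 10.0.0.2:4420 was actively refused, which usually means nothing was accepting on that port at the moment of the attempt. The standalone Python sketch below is illustrative only and not part of the test; it assumes no listener is bound to the chosen loopback port and simply reproduces the same errno that the posix_sock_create / nvme_tcp_qpair_connect_sock messages above report.

# Illustrative only, not part of the test log. A TCP connect() to a port with no
# listener fails with errno 111 (ECONNREFUSED), matching the error records above.
import errno
import socket

def try_connect(addr: str, port: int) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((addr, port))
        print(f"connected to {addr}:{port}")
    except OSError as e:
        name = errno.errorcode.get(e.errno, "?")
        print(f"connect() failed, errno = {e.errno} ({name})")
    finally:
        s.close()

# Assumes no NVMe/TCP (or other) listener is bound to this loopback port.
try_connect("127.0.0.1", 4420)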
00:39:47.113-00:39:47.114 [further retries fail the same way from 20:46:58.904491 through 20:46:58.906554]
00:39:47.114 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:47.114 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:47.114 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:39:47.114 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:47.114 [the failure record keeps repeating, interleaved with and following the script trace above, from 20:46:58.906782 through 20:46:58.909975]
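The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step in the trace above asks the running SPDK target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0; in the autotest harness rpc_cmd is effectively a wrapper around SPDK's JSON-RPC client, scripts/rpc.py. The Python sketch below issues the same call out of band by shelling out to that CLI; it is illustrative only, and the checkout path plus the default RPC socket /var/tmp/spdk.sock are assumptions about the local environment.

# Illustrative only, not part of the test log. Mirrors: rpc_cmd bdev_malloc_create 64 512 -b Malloc0
# SPDK_DIR and RPC_SOCK are assumptions about the local setup.
import subprocess

SPDK_DIR = "/path/to/spdk"          # assumed location of the SPDK checkout
RPC_SOCK = "/var/tmp/spdk.sock"     # assumed default JSON-RPC listen socket

def bdev_malloc_create(total_size_mb: int, block_size: int, name: str) -> str:
    cmd = [
        f"{SPDK_DIR}/scripts/rpc.py", "-s", RPC_SOCK,
        "bdev_malloc_create", str(total_size_mb), str(block_size), "-b", name,
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return out.stdout.strip()  # rpc.py prints the created bdev name on success

if __name__ == "__main__":
    print(bdev_malloc_create(64, 512, "Malloc0"))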
00:39:47.114-00:39:47.117 [the same connect() failed, errno = 111 / sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence continues uninterrupted from 20:46:58.910324 through 20:46:58.932814]
00:39:47.117 [2024-07-22 20:46:58.933175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.933186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.933556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.933567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.933956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.933966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.934325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.934336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.934676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.934687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.935034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.935044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.935405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.935415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.935644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.935654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.936042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.936052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 00:39:47.117 [2024-07-22 20:46:58.936281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.117 [2024-07-22 20:46:58.936292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.117 qpair failed and we were unable to recover it. 
00:39:47.117 [2024-07-22 20:46:58.936626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.936636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.937017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.937028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.937250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.937261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.937621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.937633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.937977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.937988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.938361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.938373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.938576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.938587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.938823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.938834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.939033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.939043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.939242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.939254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 
00:39:47.118 [2024-07-22 20:46:58.939639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.939649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.940009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.940020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.940238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.940248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.940624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.940634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.940829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.940839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.941167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.941177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.941563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.941574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.941838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.941848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.942219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.942229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.942409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.942419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 
00:39:47.118 [2024-07-22 20:46:58.942758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.942770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.942998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.943010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.943377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.943388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.943750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.943761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.944082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.944095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.944387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.944397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.118 [2024-07-22 20:46:58.944754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.118 [2024-07-22 20:46:58.944765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.118 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.945125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.945135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.945360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.945370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.945747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.945758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 
00:39:47.119 [2024-07-22 20:46:58.946120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.946131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.946331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.946343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.946738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.946748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.946981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.946991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.947375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.947386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.947750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.947761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.948103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.948114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.948546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.948557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.948917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.948928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.949133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.949143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 
00:39:47.119 [2024-07-22 20:46:58.949486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.949497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.949857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.949867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.950197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.950214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.950573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.950583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.950969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.950985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.951437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.951448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.951660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.951670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.951943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.951955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.952324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.952334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.952712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.952724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 
00:39:47.119 [2024-07-22 20:46:58.953088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.953099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.953488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.953499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.953890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.119 [2024-07-22 20:46:58.953900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.119 qpair failed and we were unable to recover it. 00:39:47.119 [2024-07-22 20:46:58.954266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.954277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.954516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.954526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.954874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.954885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.955111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.955122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.955486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.955497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.955881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.955892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.956327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.956338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 
00:39:47.120 [2024-07-22 20:46:58.956694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.956706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.957087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.957097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.957482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.957493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.957853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.957864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.958066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.958079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.958143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.958154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.958427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.958438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.958784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.958795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.959101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.959113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.959494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.959505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 
00:39:47.120 [2024-07-22 20:46:58.959870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.959882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.960099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.960110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.960280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.960292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.960540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.960551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.960764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.960775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.961150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.961161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.961498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.961510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.961707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.961718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.961915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.961926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 00:39:47.120 [2024-07-22 20:46:58.962272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.120 [2024-07-22 20:46:58.962283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.120 qpair failed and we were unable to recover it. 
00:39:47.121 [2024-07-22 20:46:58.962657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.962668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 Malloc0 00:39:47.121 [2024-07-22 20:46:58.963045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.963056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.963283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.963293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.963602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.963613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.121 [2024-07-22 20:46:58.963836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.963847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:47.121 [2024-07-22 20:46:58.964229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.964240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.121 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.121 [2024-07-22 20:46:58.964741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.964752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.964973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.964984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 
00:39:47.121 [2024-07-22 20:46:58.965192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.965214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.965552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.965564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.965924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.965936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.966295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.966306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.966695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.966705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.967068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.967079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.967296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.967307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.967591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.967601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.967959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.967969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.968317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.968336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 
00:39:47.121 [2024-07-22 20:46:58.968668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.968678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.968759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.968768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.969093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.969105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.969170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.969180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.969438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.969449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.969832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.121 [2024-07-22 20:46:58.969842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.121 qpair failed and we were unable to recover it. 00:39:47.121 [2024-07-22 20:46:58.970107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.970117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.970277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.970289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.970366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.122 [2024-07-22 20:46:58.970619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.970630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 
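For context on the trace above: the repeating connect() failures are on the host side, where errno 111 (ECONNREFUSED) is expected while the target's TCP listener is not yet up, and the "TCP Transport Init" notice marks the point where the target-side `rpc_cmd nvmf_create_transport -t tcp -o` call took effect. A minimal sketch of the equivalent manual target setup, assuming the stock scripts/rpc.py client; the Malloc0 line earlier in this excerpt is the bdev name returned by an earlier create call whose sizes are not shown, so the numbers below are placeholders:

  # Create the NVMe-oF TCP transport on the target (same flags as the rpc_cmd trace above).
  scripts/rpc.py nvmf_create_transport -t tcp -o
  # Create the malloc bdev reported as "Malloc0" above; 64 MiB / 512-byte blocks are illustrative values.
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512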
00:39:47.122 [2024-07-22 20:46:58.970994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.971004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.971393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.971419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.971806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.971818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.972177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.972189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.972549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.972561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.972824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.972835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.973196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.973210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.973533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.973543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.973911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.973922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.974299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.974311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 
00:39:47.122 [2024-07-22 20:46:58.974687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.974697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.974933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.974942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.975318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.975328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.975538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.975548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.975939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.975950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.976160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.976170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.976554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.122 [2024-07-22 20:46:58.976564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.122 qpair failed and we were unable to recover it. 00:39:47.122 [2024-07-22 20:46:58.976951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.976962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.977279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.977290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.977656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.977666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 
00:39:47.123 [2024-07-22 20:46:58.978049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.978059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.978282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.978292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.978603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.978614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.978987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.978999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.979269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.979280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.123 [2024-07-22 20:46:58.979643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.979655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:47.123 [2024-07-22 20:46:58.980040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.980052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.123 [2024-07-22 20:46:58.980305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.980316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 
00:39:47.123 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.123 [2024-07-22 20:46:58.980553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.980564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.980951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.980961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.981230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.981241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.981611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.981623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.982008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.982018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.982403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.982415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.982775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.982787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.982872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.982881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.983208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.983220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 
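Editorial note: the xtrace lines interleaved above show host/target_disconnect.sh (line 22 per the trace) creating the target-side subsystem over RPC. A minimal sketch of an equivalent standalone invocation follows; it assumes rpc_cmd is the autotest wrapper that forwards to SPDK's scripts/rpc.py against the running nvmf_tgt, and it reuses exactly the arguments visible in the trace.

# Sketch of the subsystem-create step traced above, issued directly with
# scripts/rpc.py instead of the rpc_cmd wrapper (assumed equivalent).
# -a allows any host NQN to connect, -s sets the subsystem serial number.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001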
00:39:47.123 [2024-07-22 20:46:58.983501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.983512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.983869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.983879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.984081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.984091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.984449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.984460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.123 [2024-07-22 20:46:58.984874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.123 [2024-07-22 20:46:58.984885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.123 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.985272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.985282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.985678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.985689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.985910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.985920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.986124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.986134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.986470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.986482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 
00:39:47.124 [2024-07-22 20:46:58.986736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.986746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.987131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.987142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.987328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.987339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.987692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.987704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.988085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.988096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.988455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.988466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.988828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.988840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.989106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.989117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.989186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.989198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.989563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.989575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 
00:39:47.124 [2024-07-22 20:46:58.989940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.989952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.990175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.990186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.990570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.990582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.991008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.991019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.991370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.124 [2024-07-22 20:46:58.991381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:47.124 [2024-07-22 20:46:58.991773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.991785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.124 [2024-07-22 20:46:58.992150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.992161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 20:46:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.124 [2024-07-22 20:46:58.992536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.992547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 
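Editorial note: target_disconnect.sh@24, traced above, attaches the Malloc0 bdev to the subsystem as a namespace. The sketch below pairs that step with a bdev_malloc_create call for completeness; the size and block size are illustrative assumptions, since the bdev was created earlier in the run and its real parameters are not visible in this part of the log.

# Assumed prerequisite: create the Malloc0 bdev (64 MiB / 512-byte blocks used
# here only as an example; the actual parameters are set elsewhere in the test).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# The step actually traced above: expose Malloc0 as a namespace of cnode1.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0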
00:39:47.124 [2024-07-22 20:46:58.992925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.992940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.124 qpair failed and we were unable to recover it. 00:39:47.124 [2024-07-22 20:46:58.993323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.124 [2024-07-22 20:46:58.993334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.993599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.993609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.993975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.993985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.994206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.994217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.994547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.994557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.994713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.994723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.995096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.995107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.995304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.995315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.995637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.995647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 
00:39:47.125 [2024-07-22 20:46:58.996008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.996019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.996405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.996416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.996774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.996784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.997145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.997155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.997498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.997509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.997881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.997893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.998297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.998307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.998698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.998709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.999069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.999079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.999303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.999313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 
00:39:47.125 [2024-07-22 20:46:58.999523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.999535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:58.999902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:58.999913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.000128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.000138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.000514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.000525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.000885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.000895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.001256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.001268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.001656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.001667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.125 [2024-07-22 20:46:59.002027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.125 [2024-07-22 20:46:59.002037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.125 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.002403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.002415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.002643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.002653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 
00:39:47.126 [2024-07-22 20:46:59.002851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.002861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.003099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.003110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.003467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.126 [2024-07-22 20:46:59.003478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.003683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.003693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:47.126 [2024-07-22 20:46:59.004072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.004083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.126 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.126 [2024-07-22 20:46:59.004443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.004454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.004651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.004662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.004855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.004865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 
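Editorial note: target_disconnect.sh@25, traced above, adds the TCP listener for cnode1 on 10.0.0.2:4420, which is the address/port the retrying initiator has been failing to reach. A hedged standalone equivalent is sketched below; the nvmf_create_transport line is an assumed prerequisite that the test performs earlier in the run, outside this excerpt.

# Assumed prerequisite: the TCP transport must exist before listeners are added
# (done earlier in the test run, not shown in this part of the log).
./scripts/rpc.py nvmf_create_transport -t tcp
# The step traced above: listen for cnode1 on the address/port from the log.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420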
00:39:47.126 [2024-07-22 20:46:59.005245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.005256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.005452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.005462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.005781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.005792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.005905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.005915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.006136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.006146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.006502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.006513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.006862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.006873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.007239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.007250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.007552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.007562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.007771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.007782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 
00:39:47.126 [2024-07-22 20:46:59.008149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.008159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.008526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.008538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.008744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.008754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.009180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.009191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.126 qpair failed and we were unable to recover it. 00:39:47.126 [2024-07-22 20:46:59.009404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.126 [2024-07-22 20:46:59.009415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.009792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.127 [2024-07-22 20:46:59.009802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.010163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.127 [2024-07-22 20:46:59.010174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.010536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:47.127 [2024-07-22 20:46:59.010547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:39:47.127 qpair failed and we were unable to recover it. 
00:39:47.127 [2024-07-22 20:46:59.010674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:47.127 [2024-07-22 20:46:59.021850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.021965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.021986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.021998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.022005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.022028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.127 20:46:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3902552 00:39:47.127 [2024-07-22 20:46:59.031765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.031851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.031867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.031876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.031883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.031900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 
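Editorial note: once nvmf_tcp_listen reports the target listening on 10.0.0.2:4420 and the discovery listener is added, the failure mode above changes character: the TCP socket now connects, but the fabrics CONNECT for an I/O queue pair is rejected with "Unknown controller ID 0x1" and completed with sct 1, sc 130 (0x82), which is consistent with the NVMe-oF Connect Invalid Parameters status, because the disconnect test keeps tearing down the admin controller that the I/O queue refers to. For reference only, a rough kernel-initiator equivalent of the connection being retried is sketched below; the test itself drives SPDK's userspace NVMe/TCP initiator, not nvme-cli.

# Rough kernel-initiator equivalent of the connection retried above
# (illustration only; the test uses SPDK's own NVMe/TCP initiator).
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1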
00:39:47.127 [2024-07-22 20:46:59.041765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.041851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.041868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.041876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.041883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.041900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.051765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.051851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.051868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.051880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.051888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.051904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.061789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.061874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.061890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.061899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.061905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.061922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 
00:39:47.127 [2024-07-22 20:46:59.071861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.071969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.071993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.072004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.072011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.072033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.081826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.081975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.127 [2024-07-22 20:46:59.081993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.127 [2024-07-22 20:46:59.082001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.127 [2024-07-22 20:46:59.082008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.127 [2024-07-22 20:46:59.082024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.127 qpair failed and we were unable to recover it. 00:39:47.127 [2024-07-22 20:46:59.091924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.127 [2024-07-22 20:46:59.092014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.128 [2024-07-22 20:46:59.092037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.128 [2024-07-22 20:46:59.092048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.128 [2024-07-22 20:46:59.092057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.128 [2024-07-22 20:46:59.092079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.128 qpair failed and we were unable to recover it. 
00:39:47.128 [2024-07-22 20:46:59.101879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.128 [2024-07-22 20:46:59.101962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.128 [2024-07-22 20:46:59.101979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.128 [2024-07-22 20:46:59.101993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.128 [2024-07-22 20:46:59.102000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.128 [2024-07-22 20:46:59.102018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.128 qpair failed and we were unable to recover it. 00:39:47.128 [2024-07-22 20:46:59.111919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.128 [2024-07-22 20:46:59.111999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.128 [2024-07-22 20:46:59.112017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.128 [2024-07-22 20:46:59.112026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.128 [2024-07-22 20:46:59.112033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.128 [2024-07-22 20:46:59.112049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.128 qpair failed and we were unable to recover it. 00:39:47.391 [2024-07-22 20:46:59.121937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.122014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.122030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.122039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.122045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.122061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 
00:39:47.391 [2024-07-22 20:46:59.131933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.132011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.132026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.132035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.132041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.132058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 00:39:47.391 [2024-07-22 20:46:59.141919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.141999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.142015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.142023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.142030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.142046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 00:39:47.391 [2024-07-22 20:46:59.152023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.152100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.152116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.152125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.152130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.152146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 
00:39:47.391 [2024-07-22 20:46:59.162088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.162166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.162182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.162190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.162196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.162218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 00:39:47.391 [2024-07-22 20:46:59.172076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.172152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.172168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.172176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.172183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.172198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 00:39:47.391 [2024-07-22 20:46:59.182051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.182140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.182156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.182164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.182171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.182186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 
00:39:47.391 [2024-07-22 20:46:59.192109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.391 [2024-07-22 20:46:59.192196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.391 [2024-07-22 20:46:59.192217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.391 [2024-07-22 20:46:59.192229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.391 [2024-07-22 20:46:59.192235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.391 [2024-07-22 20:46:59.192252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.391 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.202139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.202224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.202240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.202248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.202255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.202271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.212219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.212302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.212319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.212327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.212333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.212349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 
00:39:47.392 [2024-07-22 20:46:59.222315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.222448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.222464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.222473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.222479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.222494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.232263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.232344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.232360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.232368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.232374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.232390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.242247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.242319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.242335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.242343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.242350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.242365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 
00:39:47.392 [2024-07-22 20:46:59.252309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.252394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.252409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.252417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.252423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.252439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.262335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.262478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.262494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.262502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.262508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.262524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.272361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.272443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.272459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.272468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.272475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.272490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 
00:39:47.392 [2024-07-22 20:46:59.282382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.282466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.282484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.282492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.282499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.282514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.292328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.292425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.292441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.292449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.292456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.292471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.302464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.302580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.302596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.302604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.302610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.302626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 
00:39:47.392 [2024-07-22 20:46:59.312546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.312623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.312640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.312648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.312654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.312670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.322514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.322594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.392 [2024-07-22 20:46:59.322610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.392 [2024-07-22 20:46:59.322618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.392 [2024-07-22 20:46:59.322624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.392 [2024-07-22 20:46:59.322643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.392 qpair failed and we were unable to recover it. 00:39:47.392 [2024-07-22 20:46:59.332640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.392 [2024-07-22 20:46:59.332718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.332734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.332742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.332748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.332766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 
00:39:47.393 [2024-07-22 20:46:59.342532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.342621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.342637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.342644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.342651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.342666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 00:39:47.393 [2024-07-22 20:46:59.352575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.352690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.352706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.352714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.352719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.352734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 00:39:47.393 [2024-07-22 20:46:59.362624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.362765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.362782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.362790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.362795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.362810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 
00:39:47.393 [2024-07-22 20:46:59.372639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.372721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.372741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.372749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.372755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.372771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 00:39:47.393 [2024-07-22 20:46:59.382674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.382802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.382818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.382826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.382832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.382848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 00:39:47.393 [2024-07-22 20:46:59.392749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.392861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.392877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.392885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.392891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.392907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 
00:39:47.393 [2024-07-22 20:46:59.402722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.393 [2024-07-22 20:46:59.402813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.393 [2024-07-22 20:46:59.402837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.393 [2024-07-22 20:46:59.402846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.393 [2024-07-22 20:46:59.402854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.393 [2024-07-22 20:46:59.402873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.393 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.412966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.413055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.413078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.413088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.413098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.413119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.422772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.422854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.422872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.422880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.422887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.422904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 
00:39:47.656 [2024-07-22 20:46:59.432813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.432965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.432981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.432989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.432995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.433011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.442826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.442914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.442930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.442938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.442944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.442959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.452860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.452940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.452955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.452964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.452970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.452985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 
00:39:47.656 [2024-07-22 20:46:59.462895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.462985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.463001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.463009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.463015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.463031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.472874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.472956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.472972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.472980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.472986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.473001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.482955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.483028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.483044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.483052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.483058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.483073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 
00:39:47.656 [2024-07-22 20:46:59.493009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.493158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.493174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.493182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.493188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.493210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.503024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.503107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.503123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.503131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.503139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.503155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.513053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.513133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.513148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.513156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.513161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.513176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 
00:39:47.656 [2024-07-22 20:46:59.523069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.523147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.523164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.523174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.523180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.523196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.533075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.533155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.533171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.533180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.533186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.533207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 00:39:47.656 [2024-07-22 20:46:59.543109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.543198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.656 [2024-07-22 20:46:59.543224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.656 [2024-07-22 20:46:59.543233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.656 [2024-07-22 20:46:59.543239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.656 [2024-07-22 20:46:59.543255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.656 qpair failed and we were unable to recover it. 
00:39:47.656 [2024-07-22 20:46:59.553167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.656 [2024-07-22 20:46:59.553283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.553300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.553308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.553314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.553330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.563154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.563257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.563273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.563281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.563292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.563308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.573156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.573244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.573260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.573268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.573274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.573289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 
00:39:47.657 [2024-07-22 20:46:59.583220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.583333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.583349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.583357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.583363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.583378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.593178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.593262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.593278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.593288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.593294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.593310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.603299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.603375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.603391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.603399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.603404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.603419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 
00:39:47.657 [2024-07-22 20:46:59.613302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.613382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.613397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.613406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.613412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.613427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.623365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.623450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.623466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.623474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.623480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.623495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.633362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.633436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.633452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.633460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.633465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.633480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 
00:39:47.657 [2024-07-22 20:46:59.643417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.643495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.643511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.643519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.643524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.643540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.653448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.653539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.653554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.653562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.653568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.653583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.657 [2024-07-22 20:46:59.663374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.663455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.663471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.663478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.663484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.663499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 
00:39:47.657 [2024-07-22 20:46:59.673605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.657 [2024-07-22 20:46:59.673680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.657 [2024-07-22 20:46:59.673696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.657 [2024-07-22 20:46:59.673704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.657 [2024-07-22 20:46:59.673710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.657 [2024-07-22 20:46:59.673726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.657 qpair failed and we were unable to recover it. 00:39:47.919 [2024-07-22 20:46:59.683548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.919 [2024-07-22 20:46:59.683638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.919 [2024-07-22 20:46:59.683656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.919 [2024-07-22 20:46:59.683664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.919 [2024-07-22 20:46:59.683670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.919 [2024-07-22 20:46:59.683684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.919 qpair failed and we were unable to recover it. 00:39:47.919 [2024-07-22 20:46:59.693536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.919 [2024-07-22 20:46:59.693617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.919 [2024-07-22 20:46:59.693633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.919 [2024-07-22 20:46:59.693640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.919 [2024-07-22 20:46:59.693646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.919 [2024-07-22 20:46:59.693661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.919 qpair failed and we were unable to recover it. 
00:39:47.919 [2024-07-22 20:46:59.703550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.919 [2024-07-22 20:46:59.703643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.919 [2024-07-22 20:46:59.703661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.919 [2024-07-22 20:46:59.703669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.919 [2024-07-22 20:46:59.703675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.919 [2024-07-22 20:46:59.703690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.919 qpair failed and we were unable to recover it. 00:39:47.919 [2024-07-22 20:46:59.713574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.919 [2024-07-22 20:46:59.713669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.919 [2024-07-22 20:46:59.713685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.919 [2024-07-22 20:46:59.713692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.919 [2024-07-22 20:46:59.713698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.919 [2024-07-22 20:46:59.713713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.919 qpair failed and we were unable to recover it. 00:39:47.919 [2024-07-22 20:46:59.723627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.919 [2024-07-22 20:46:59.723706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.919 [2024-07-22 20:46:59.723722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.919 [2024-07-22 20:46:59.723729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.919 [2024-07-22 20:46:59.723735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.723753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 
00:39:47.920 [2024-07-22 20:46:59.733659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.733740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.733755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.733763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.733769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.733784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.743699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.743811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.743826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.743835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.743841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.743856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.753697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.753768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.753784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.753792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.753798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.753814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 
00:39:47.920 [2024-07-22 20:46:59.763726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.763798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.763814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.763822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.763828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.763843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.773661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.773740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.773760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.773768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.773774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.773789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.783951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.784057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.784073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.784082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.784087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.784102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 
00:39:47.920 [2024-07-22 20:46:59.793836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.793916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.793931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.793939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.793945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.793960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.803851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.803940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.803963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.803973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.803979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.803998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.813809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.813910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.813927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.813935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.813945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.813961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 
00:39:47.920 [2024-07-22 20:46:59.823926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.824053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.824074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.824082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.824088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.824104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.833924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.834073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.834089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.834097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.834103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.834118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 00:39:47.920 [2024-07-22 20:46:59.844140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.844293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.844310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.844317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.920 [2024-07-22 20:46:59.844323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.920 [2024-07-22 20:46:59.844339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.920 qpair failed and we were unable to recover it. 
00:39:47.920 [2024-07-22 20:46:59.853976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.920 [2024-07-22 20:46:59.854053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.920 [2024-07-22 20:46:59.854069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.920 [2024-07-22 20:46:59.854077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.854083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.854098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.863952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.864033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.864049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.864056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.864062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.864078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.874036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.874114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.874130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.874138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.874144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.874159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 
00:39:47.921 [2024-07-22 20:46:59.884048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.884122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.884138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.884146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.884152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.884167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.894106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.894194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.894214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.894222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.894229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.894245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.904118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.904197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.904219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.904227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.904235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.904251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 
00:39:47.921 [2024-07-22 20:46:59.914151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.914231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.914247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.914255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.914261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.914276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.924124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.924206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.924222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.924230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.924236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.924251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 00:39:47.921 [2024-07-22 20:46:59.934190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:47.921 [2024-07-22 20:46:59.934275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:47.921 [2024-07-22 20:46:59.934291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:47.921 [2024-07-22 20:46:59.934299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:47.921 [2024-07-22 20:46:59.934305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:47.921 [2024-07-22 20:46:59.934321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:47.921 qpair failed and we were unable to recover it. 
00:39:48.184 [2024-07-22 20:46:59.944204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.944289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.944305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.944313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.944319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.944335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:46:59.954242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.954318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.954334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.954342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.954347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.954363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:46:59.964257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.964329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.964345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.964352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.964358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.964373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 
00:39:48.184 [2024-07-22 20:46:59.974532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.974638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.974653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.974661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.974667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.974682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:46:59.984307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.984421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.984437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.984444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.984450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.984466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:46:59.994316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:46:59.994395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:46:59.994411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:46:59.994421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:46:59.994427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:46:59.994442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 
00:39:48.184 [2024-07-22 20:47:00.004391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.004476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.004493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.004502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.004508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.004524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:47:00.014453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.014552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.014567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.014575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.014581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.014597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:47:00.024427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.024510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.024526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.024534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.024540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.024556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 
00:39:48.184 [2024-07-22 20:47:00.034372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.034447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.034463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.034471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.034477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.034492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:47:00.044508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.044586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.044601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.044610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.044616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.044631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 00:39:48.184 [2024-07-22 20:47:00.054426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.184 [2024-07-22 20:47:00.054503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.184 [2024-07-22 20:47:00.054519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.184 [2024-07-22 20:47:00.054526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.184 [2024-07-22 20:47:00.054532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.184 [2024-07-22 20:47:00.054548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.184 qpair failed and we were unable to recover it. 
00:39:48.185 [2024-07-22 20:47:00.064539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.064630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.064646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.064654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.064660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.064675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.074589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.074665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.074681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.074689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.074695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.074716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.084511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.084583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.084601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.084609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.084615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.084631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 
00:39:48.185 [2024-07-22 20:47:00.094605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.094684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.094699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.094707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.094713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.094728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.104655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.104735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.104750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.104759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.104765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.104780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.114680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.114760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.114776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.114784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.114790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.114805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 
00:39:48.185 [2024-07-22 20:47:00.124686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.124777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.124792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.124800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.124806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.124823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.134733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.134813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.134829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.134837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.134842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.134857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.144787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.144872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.144887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.144895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.144901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.144917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 
00:39:48.185 [2024-07-22 20:47:00.154747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.154835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.154851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.154859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.154864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.154880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.164805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.164887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.164904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.164912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.164918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.164933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.185 [2024-07-22 20:47:00.174835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.174922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.174948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.174958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.174965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.174985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 
00:39:48.185 [2024-07-22 20:47:00.184863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.185 [2024-07-22 20:47:00.184948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.185 [2024-07-22 20:47:00.184971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.185 [2024-07-22 20:47:00.184980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.185 [2024-07-22 20:47:00.184987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.185 [2024-07-22 20:47:00.185007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.185 qpair failed and we were unable to recover it. 00:39:48.186 [2024-07-22 20:47:00.194895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.186 [2024-07-22 20:47:00.194981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.186 [2024-07-22 20:47:00.195004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.186 [2024-07-22 20:47:00.195013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.186 [2024-07-22 20:47:00.195019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.186 [2024-07-22 20:47:00.195039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.186 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.204908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.205012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.205035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.205045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.205052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.205072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 
00:39:48.450 [2024-07-22 20:47:00.214965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.215090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.215107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.215115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.215122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.215142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.224955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.225047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.225068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.225076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.225082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.225098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.234999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.235074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.235090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.235098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.235104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.235119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 
00:39:48.450 [2024-07-22 20:47:00.245032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.245115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.245131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.245139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.245145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.245161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.255042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.255116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.255132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.255140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.255146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.255161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.265081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.265164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.265180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.265188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.265194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.265215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 
00:39:48.450 [2024-07-22 20:47:00.275109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.275189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.275210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.275218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.275224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.275240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.285066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.285174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.285190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.285198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.285209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.285225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 00:39:48.450 [2024-07-22 20:47:00.295197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.295284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.450 [2024-07-22 20:47:00.295299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.450 [2024-07-22 20:47:00.295308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.450 [2024-07-22 20:47:00.295314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.450 [2024-07-22 20:47:00.295329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.450 qpair failed and we were unable to recover it. 
00:39:48.450 [2024-07-22 20:47:00.305182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.450 [2024-07-22 20:47:00.305271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.305288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.305295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.305304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.305321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.315238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.315313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.315329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.315337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.315343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.315358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.325259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.325336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.325352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.325359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.325365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.325381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 
00:39:48.451 [2024-07-22 20:47:00.335265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.335341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.335357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.335370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.335376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.335392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.345333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.345462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.345477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.345485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.345491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.345507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.355352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.355434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.355449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.355457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.355463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.355478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 
00:39:48.451 [2024-07-22 20:47:00.365367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.365449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.365466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.365473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.365479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.365494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.375412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.375497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.375513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.375521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.375527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.375542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.385331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.385421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.385437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.385445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.385451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.385466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 
00:39:48.451 [2024-07-22 20:47:00.395512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.395588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.395603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.395614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.395620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.395636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.405482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.405569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.405585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.405593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.405598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.405614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.415527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.415604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.415620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.415628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.415633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.415649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 
00:39:48.451 [2024-07-22 20:47:00.425540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.425628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.425644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.425651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.451 [2024-07-22 20:47:00.425657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.451 [2024-07-22 20:47:00.425672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.451 qpair failed and we were unable to recover it. 00:39:48.451 [2024-07-22 20:47:00.435566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.451 [2024-07-22 20:47:00.435645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.451 [2024-07-22 20:47:00.435661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.451 [2024-07-22 20:47:00.435668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.452 [2024-07-22 20:47:00.435674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.452 [2024-07-22 20:47:00.435689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.452 qpair failed and we were unable to recover it. 00:39:48.452 [2024-07-22 20:47:00.445595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.452 [2024-07-22 20:47:00.445672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.452 [2024-07-22 20:47:00.445688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.452 [2024-07-22 20:47:00.445696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.452 [2024-07-22 20:47:00.445702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.452 [2024-07-22 20:47:00.445731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.452 qpair failed and we were unable to recover it. 
00:39:48.452 [2024-07-22 20:47:00.455676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.452 [2024-07-22 20:47:00.455774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.452 [2024-07-22 20:47:00.455790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.452 [2024-07-22 20:47:00.455798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.452 [2024-07-22 20:47:00.455804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.452 [2024-07-22 20:47:00.455820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.452 qpair failed and we were unable to recover it. 00:39:48.452 [2024-07-22 20:47:00.465654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.452 [2024-07-22 20:47:00.465744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.452 [2024-07-22 20:47:00.465760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.452 [2024-07-22 20:47:00.465768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.452 [2024-07-22 20:47:00.465773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.452 [2024-07-22 20:47:00.465789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.452 qpair failed and we were unable to recover it. 00:39:48.714 [2024-07-22 20:47:00.475709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.714 [2024-07-22 20:47:00.475854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.714 [2024-07-22 20:47:00.475871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.714 [2024-07-22 20:47:00.475879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.714 [2024-07-22 20:47:00.475885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.714 [2024-07-22 20:47:00.475900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.714 qpair failed and we were unable to recover it. 
00:39:48.714 [2024-07-22 20:47:00.485716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.714 [2024-07-22 20:47:00.485792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.714 [2024-07-22 20:47:00.485808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.714 [2024-07-22 20:47:00.485818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.714 [2024-07-22 20:47:00.485824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.714 [2024-07-22 20:47:00.485840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.714 qpair failed and we were unable to recover it. 00:39:48.714 [2024-07-22 20:47:00.495746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.714 [2024-07-22 20:47:00.495824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.495840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.495848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.495854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.495869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.505817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.505894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.505911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.505918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.505924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.505940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 
00:39:48.715 [2024-07-22 20:47:00.515803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.515883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.515905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.515915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.515922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.515941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.525834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.525920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.525943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.525953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.525960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.525980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.535871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.535956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.535979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.535988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.535996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.536015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 
00:39:48.715 [2024-07-22 20:47:00.545843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.545925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.545944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.545952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.545958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.545976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.556023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.556105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.556128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.556138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.556145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.556165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.565898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.566018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.566036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.566044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.566050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.566067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 
00:39:48.715 [2024-07-22 20:47:00.575970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.576047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.576067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.576076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.576082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.576098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.715 [2024-07-22 20:47:00.585968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.715 [2024-07-22 20:47:00.586045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.715 [2024-07-22 20:47:00.586062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.715 [2024-07-22 20:47:00.586070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.715 [2024-07-22 20:47:00.586078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.715 [2024-07-22 20:47:00.586095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.715 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.596036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.596114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.596131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.596139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.596145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.596160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 
00:39:48.716 [2024-07-22 20:47:00.605980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.606060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.606076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.606084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.606090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.606104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.616018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.616096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.616112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.616122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.616128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.616147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.626123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.626217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.626234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.626243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.626249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.626265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 
00:39:48.716 [2024-07-22 20:47:00.636142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.636229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.636245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.636253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.636259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.636274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.646164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.646244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.646260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.646268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.646273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.646289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.656231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.656331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.656347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.656354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.656360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.656375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 
00:39:48.716 [2024-07-22 20:47:00.666204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.666290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.666308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.666316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.666322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.666337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.676249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.676325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.676341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.716 [2024-07-22 20:47:00.676349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.716 [2024-07-22 20:47:00.676355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.716 [2024-07-22 20:47:00.676370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.716 qpair failed and we were unable to recover it. 00:39:48.716 [2024-07-22 20:47:00.686262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.716 [2024-07-22 20:47:00.686383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.716 [2024-07-22 20:47:00.686399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.717 [2024-07-22 20:47:00.686407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.717 [2024-07-22 20:47:00.686413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.717 [2024-07-22 20:47:00.686428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.717 qpair failed and we were unable to recover it. 
00:39:48.717 [2024-07-22 20:47:00.696288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.717 [2024-07-22 20:47:00.696369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.717 [2024-07-22 20:47:00.696385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.717 [2024-07-22 20:47:00.696393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.717 [2024-07-22 20:47:00.696398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.717 [2024-07-22 20:47:00.696413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.717 qpair failed and we were unable to recover it. 00:39:48.717 [2024-07-22 20:47:00.706337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.717 [2024-07-22 20:47:00.706412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.717 [2024-07-22 20:47:00.706429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.717 [2024-07-22 20:47:00.706437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.717 [2024-07-22 20:47:00.706445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.717 [2024-07-22 20:47:00.706462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.717 qpair failed and we were unable to recover it. 00:39:48.717 [2024-07-22 20:47:00.716360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.717 [2024-07-22 20:47:00.716434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.717 [2024-07-22 20:47:00.716451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.717 [2024-07-22 20:47:00.716458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.717 [2024-07-22 20:47:00.716464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.717 [2024-07-22 20:47:00.716479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.717 qpair failed and we were unable to recover it. 
00:39:48.717 [2024-07-22 20:47:00.726398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.717 [2024-07-22 20:47:00.726472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.717 [2024-07-22 20:47:00.726488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.717 [2024-07-22 20:47:00.726496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.717 [2024-07-22 20:47:00.726501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.717 [2024-07-22 20:47:00.726516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.717 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.736402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.736507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.736523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.736532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.736538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.736553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.746481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.746612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.746627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.746636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.746642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.746658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 
00:39:48.980 [2024-07-22 20:47:00.756519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.756602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.756618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.756625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.756631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.756646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.766502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.766581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.766596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.766604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.766610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.766625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.776558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.776636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.776652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.776660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.776665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.776680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 
00:39:48.980 [2024-07-22 20:47:00.786550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.786683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.786699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.786707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.786713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.786728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.796580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.796660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.796676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.796687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.796693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.796708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 00:39:48.980 [2024-07-22 20:47:00.806609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.806703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.980 [2024-07-22 20:47:00.806720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.980 [2024-07-22 20:47:00.806728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.980 [2024-07-22 20:47:00.806734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.980 [2024-07-22 20:47:00.806749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.980 qpair failed and we were unable to recover it. 
00:39:48.980 [2024-07-22 20:47:00.816687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.980 [2024-07-22 20:47:00.816770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.816787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.816795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.816801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.816816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.826668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.826751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.826767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.826775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.826780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.826796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.836732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.836817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.836833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.836841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.836847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.836862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 
00:39:48.981 [2024-07-22 20:47:00.846749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.846853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.846869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.846877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.846888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.846903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.856821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.856901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.856917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.856924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.856930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.856945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.866795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.866880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.866896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.866904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.866910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.866925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 
00:39:48.981 [2024-07-22 20:47:00.876748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.876831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.876846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.876855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.876860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.876875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.886768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.886845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.886861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.886871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.886876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.981 [2024-07-22 20:47:00.886891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.981 qpair failed and we were unable to recover it. 00:39:48.981 [2024-07-22 20:47:00.896890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.981 [2024-07-22 20:47:00.896967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.981 [2024-07-22 20:47:00.896983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.981 [2024-07-22 20:47:00.896992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.981 [2024-07-22 20:47:00.896998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.897014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 
00:39:48.982 [2024-07-22 20:47:00.906904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.906988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.907004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.907012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.907018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.907033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.916932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.917013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.917029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.917037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.917043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.917058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.926962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.927061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.927077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.927084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.927090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.927105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 
00:39:48.982 [2024-07-22 20:47:00.936928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.937008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.937024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.937031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.937037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.937052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.946980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.947068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.947084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.947092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.947098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.947112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.957056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.957137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.957153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.957161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.957167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.957182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 
00:39:48.982 [2024-07-22 20:47:00.967071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.967155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.967171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.967178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.967184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.967205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.977083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.977163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.977182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.977190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.977196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.977218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:48.982 [2024-07-22 20:47:00.987075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.987150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.987165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.987173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.987178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.987193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 
00:39:48.982 [2024-07-22 20:47:00.997078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:48.982 [2024-07-22 20:47:00.997152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:48.982 [2024-07-22 20:47:00.997168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:48.982 [2024-07-22 20:47:00.997175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:48.982 [2024-07-22 20:47:00.997181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:48.982 [2024-07-22 20:47:00.997196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:48.982 qpair failed and we were unable to recover it. 00:39:49.245 [2024-07-22 20:47:01.007175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.007273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.007290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.007298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.007304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.007320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 00:39:49.245 [2024-07-22 20:47:01.017206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.017290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.017306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.017314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.017319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.017338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 
00:39:49.245 [2024-07-22 20:47:01.027216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.027290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.027306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.027314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.027319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.027334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 00:39:49.245 [2024-07-22 20:47:01.037344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.037444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.037460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.037468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.037474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.037489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 00:39:49.245 [2024-07-22 20:47:01.047296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.047380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.047396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.047404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.047410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.047426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 
00:39:49.245 [2024-07-22 20:47:01.057310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.057403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.057419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.057427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.057433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.057448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.245 qpair failed and we were unable to recover it. 00:39:49.245 [2024-07-22 20:47:01.067405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.245 [2024-07-22 20:47:01.067505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.245 [2024-07-22 20:47:01.067524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.245 [2024-07-22 20:47:01.067532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.245 [2024-07-22 20:47:01.067538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.245 [2024-07-22 20:47:01.067553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.077373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.077450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.077466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.077474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.077479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.077495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 
00:39:49.246 [2024-07-22 20:47:01.087401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.087482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.087498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.087506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.087512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.087527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.097435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.097519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.097535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.097542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.097549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.097564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.107464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.107545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.107565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.107573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.107581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.107596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 
00:39:49.246 [2024-07-22 20:47:01.117497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.117581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.117596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.117604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.117610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.117626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.127497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.127593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.127608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.127616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.127622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.127637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.137535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.137614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.137630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.137638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.137643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.137658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 
00:39:49.246 [2024-07-22 20:47:01.147564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.147691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.147708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.147716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.147722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.147737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.157581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.246 [2024-07-22 20:47:01.157664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.246 [2024-07-22 20:47:01.157679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.246 [2024-07-22 20:47:01.157687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.246 [2024-07-22 20:47:01.157693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.246 [2024-07-22 20:47:01.157709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.246 qpair failed and we were unable to recover it. 00:39:49.246 [2024-07-22 20:47:01.167632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.167729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.167744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.167753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.167759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.167774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 
00:39:49.247 [2024-07-22 20:47:01.177659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.177743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.177759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.177767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.177773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.177788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 00:39:49.247 [2024-07-22 20:47:01.187662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.187736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.187752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.187760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.187766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.187782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 00:39:49.247 [2024-07-22 20:47:01.197731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.197803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.197819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.197827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.197836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.197851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 
00:39:49.247 [2024-07-22 20:47:01.207723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.207794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.207810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.207818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.207824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.207839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 00:39:49.247 [2024-07-22 20:47:01.217745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.217827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.217843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.217851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.217857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.217872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 00:39:49.247 [2024-07-22 20:47:01.227706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.227788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.227804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.227812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.227817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.227832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 
00:39:49.247 [2024-07-22 20:47:01.237817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.237890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.237906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.237913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.247 [2024-07-22 20:47:01.237919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.247 [2024-07-22 20:47:01.237935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.247 qpair failed and we were unable to recover it. 00:39:49.247 [2024-07-22 20:47:01.247826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.247 [2024-07-22 20:47:01.247900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.247 [2024-07-22 20:47:01.247916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.247 [2024-07-22 20:47:01.247924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.248 [2024-07-22 20:47:01.247930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.248 [2024-07-22 20:47:01.247945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.248 qpair failed and we were unable to recover it. 00:39:49.248 [2024-07-22 20:47:01.257863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.248 [2024-07-22 20:47:01.257986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.248 [2024-07-22 20:47:01.258003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.248 [2024-07-22 20:47:01.258012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.248 [2024-07-22 20:47:01.258018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.248 [2024-07-22 20:47:01.258034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.248 qpair failed and we were unable to recover it. 
00:39:49.510 [2024-07-22 20:47:01.267866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.267949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.267973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.267983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.267989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.510 [2024-07-22 20:47:01.268009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.510 qpair failed and we were unable to recover it. 00:39:49.510 [2024-07-22 20:47:01.277914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.277996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.278019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.278028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.278035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.510 [2024-07-22 20:47:01.278055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.510 qpair failed and we were unable to recover it. 00:39:49.510 [2024-07-22 20:47:01.287953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.288048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.288071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.288083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.288090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.510 [2024-07-22 20:47:01.288110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.510 qpair failed and we were unable to recover it. 
00:39:49.510 [2024-07-22 20:47:01.297960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.298052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.298070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.298078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.298084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.510 [2024-07-22 20:47:01.298101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.510 qpair failed and we were unable to recover it. 00:39:49.510 [2024-07-22 20:47:01.308015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.308106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.308122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.308130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.308136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.510 [2024-07-22 20:47:01.308151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.510 qpair failed and we were unable to recover it. 00:39:49.510 [2024-07-22 20:47:01.318064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.510 [2024-07-22 20:47:01.318142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.510 [2024-07-22 20:47:01.318158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.510 [2024-07-22 20:47:01.318166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.510 [2024-07-22 20:47:01.318172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.318188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 
00:39:49.511 [2024-07-22 20:47:01.328055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.328129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.328145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.328152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.328158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.328173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.338037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.338115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.338133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.338140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.338146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.338162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.348133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.348251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.348268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.348275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.348281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.348297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 
00:39:49.511 [2024-07-22 20:47:01.358366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.358481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.358496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.358504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.358510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.358530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.368209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.368284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.368300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.368308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.368314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.368329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.378313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.378406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.378425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.378432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.378439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.378455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 
00:39:49.511 [2024-07-22 20:47:01.388268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.388349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.388365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.388373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.388379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.388394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.398292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.398365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.398381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.398389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.398394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.398409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.408300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.408371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.408387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.408395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.408401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.408416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 
00:39:49.511 [2024-07-22 20:47:01.418361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.418440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.418456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.418464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.418470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.418487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.428367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.428447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.428463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.428470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.428476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.428491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.511 [2024-07-22 20:47:01.438398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.438475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.438491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.438498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.438504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.438519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 
00:39:49.511 [2024-07-22 20:47:01.448420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.511 [2024-07-22 20:47:01.448507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.511 [2024-07-22 20:47:01.448523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.511 [2024-07-22 20:47:01.448531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.511 [2024-07-22 20:47:01.448536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.511 [2024-07-22 20:47:01.448551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.511 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.458446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.458522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.458539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.458547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.458553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.458569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.468485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.468566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.468585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.468593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.468599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.468615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 
00:39:49.512 [2024-07-22 20:47:01.478579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.478685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.478701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.478710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.478716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.478731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.488515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.488607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.488624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.488632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.488639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.488655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.498557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.498634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.498650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.498659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.498665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.498681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 
00:39:49.512 [2024-07-22 20:47:01.508547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.508646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.508663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.508672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.508682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.508698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.518647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.518724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.518740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.518749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.518756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.518772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 00:39:49.512 [2024-07-22 20:47:01.528647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.512 [2024-07-22 20:47:01.528731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.512 [2024-07-22 20:47:01.528748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.512 [2024-07-22 20:47:01.528757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.512 [2024-07-22 20:47:01.528763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.512 [2024-07-22 20:47:01.528779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.512 qpair failed and we were unable to recover it. 
00:39:49.774 [2024-07-22 20:47:01.538649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.774 [2024-07-22 20:47:01.538734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.774 [2024-07-22 20:47:01.538750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.774 [2024-07-22 20:47:01.538759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.774 [2024-07-22 20:47:01.538765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.774 [2024-07-22 20:47:01.538781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.774 qpair failed and we were unable to recover it. 00:39:49.774 [2024-07-22 20:47:01.548875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.774 [2024-07-22 20:47:01.548948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.774 [2024-07-22 20:47:01.548964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.774 [2024-07-22 20:47:01.548973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.774 [2024-07-22 20:47:01.548979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.774 [2024-07-22 20:47:01.548995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.774 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.558711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.558798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.558821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.558832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.558839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.558863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 
00:39:49.775 [2024-07-22 20:47:01.568943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.569089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.569112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.569123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.569131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.569151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.578791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.578869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.578887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.578897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.578905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.578923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.588793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.588868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.588885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.588893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.588900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.588917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 
00:39:49.775 [2024-07-22 20:47:01.598835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.598923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.598939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.598948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.598957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.598973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.608864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.608952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.608969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.608977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.608984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.609000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.618846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.618925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.618942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.618958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.618964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.618981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 
00:39:49.775 [2024-07-22 20:47:01.628852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.628935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.628952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.628960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.628967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.628983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.638938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.639027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.639050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.639061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.639068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.639089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.648955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.649033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.649051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.649060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.649067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.649084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 
00:39:49.775 [2024-07-22 20:47:01.658805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.658873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.658889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.658898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.658905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.658921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.669051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.669129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.669145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.669154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.669161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.669177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 00:39:49.775 [2024-07-22 20:47:01.679049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.775 [2024-07-22 20:47:01.679121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.775 [2024-07-22 20:47:01.679137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.775 [2024-07-22 20:47:01.679146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.775 [2024-07-22 20:47:01.679164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.775 [2024-07-22 20:47:01.679181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.775 qpair failed and we were unable to recover it. 
00:39:49.775 [2024-07-22 20:47:01.689144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.689221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.689238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.689249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.689256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.689272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.698908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.698975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.698991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.699000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.699007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.699023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.709075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.709193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.709214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.709222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.709229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.709245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 
00:39:49.776 [2024-07-22 20:47:01.718984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.719047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.719063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.719071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.719078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.719094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.729223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.729295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.729312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.729322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.729329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.729345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.739121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.739192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.739213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.739222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.739229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.739245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 
00:39:49.776 [2024-07-22 20:47:01.749233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.749316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.749332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.749340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.749347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.749363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.759061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.759128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.759144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.759153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.759159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.759175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.769214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.769288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.769305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.769313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.769320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.769336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 
00:39:49.776 [2024-07-22 20:47:01.779127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.779225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.779245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.779253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.779260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.779276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:49.776 [2024-07-22 20:47:01.789350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:49.776 [2024-07-22 20:47:01.789424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:49.776 [2024-07-22 20:47:01.789440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:49.776 [2024-07-22 20:47:01.789448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:49.776 [2024-07-22 20:47:01.789455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:49.776 [2024-07-22 20:47:01.789472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:49.776 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.799189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.799286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.799302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.799309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.799315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.799331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 
00:39:50.039 [2024-07-22 20:47:01.809401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.809479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.809495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.809503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.809509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.809525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.819245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.819316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.819332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.819340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.819346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.819364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.829614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.829707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.829724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.829732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.829738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.829753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 
00:39:50.039 [2024-07-22 20:47:01.839275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.839338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.839354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.839362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.839368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.839386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.849503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.849574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.849589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.849597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.849603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.849619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.859360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.859494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.859510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.859519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.859525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.859540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 
00:39:50.039 [2024-07-22 20:47:01.869586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.869680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.869699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.869707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.869713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.869729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.879418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.879487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.879503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.879511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.879517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.879533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.889603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.889681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.889697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.889705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.889712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.889727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 
00:39:50.039 [2024-07-22 20:47:01.899446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.899515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.899531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.899539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.899545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.899560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.909658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.909731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.039 [2024-07-22 20:47:01.909746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.039 [2024-07-22 20:47:01.909754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.039 [2024-07-22 20:47:01.909760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.039 [2024-07-22 20:47:01.909778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.039 qpair failed and we were unable to recover it. 00:39:50.039 [2024-07-22 20:47:01.919517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.039 [2024-07-22 20:47:01.919585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.919600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.919608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.919614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.919630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 
00:39:50.040 [2024-07-22 20:47:01.929707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.929853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.929869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.929877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.929883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.929898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:01.939560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.939656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.939672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.939680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.939687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.939703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:01.949589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.949660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.949676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.949683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.949689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.949706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 
00:39:50.040 [2024-07-22 20:47:01.959627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.959701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.959716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.959724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.959730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.959746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:01.969825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.969899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.969915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.969923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.969929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.969945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:01.979663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.979768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.979785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.979793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.979800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.979816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 
00:39:50.040 [2024-07-22 20:47:01.989706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.989775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.989790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.989798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.989804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.989820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:01.999754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:01.999821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:01.999836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:01.999844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:01.999853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:01.999869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:02.009915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:02.009987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:02.010003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:02.010011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:02.010017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:02.010032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 
00:39:50.040 [2024-07-22 20:47:02.019769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:02.019839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:02.019854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:02.019862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:02.019868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:02.019884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:02.029709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:02.029848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:02.029864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:02.029872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:02.029878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:02.029894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 00:39:50.040 [2024-07-22 20:47:02.039822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:02.039899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.040 [2024-07-22 20:47:02.039914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.040 [2024-07-22 20:47:02.039923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.040 [2024-07-22 20:47:02.039929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.040 [2024-07-22 20:47:02.039945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.040 qpair failed and we were unable to recover it. 
00:39:50.040 [2024-07-22 20:47:02.050023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.040 [2024-07-22 20:47:02.050093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.041 [2024-07-22 20:47:02.050109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.041 [2024-07-22 20:47:02.050117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.041 [2024-07-22 20:47:02.050123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.041 [2024-07-22 20:47:02.050139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.041 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.059914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.059981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.059996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.060004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.060010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.060026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.069920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.070001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.070016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.070024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.070031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.070047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 
00:39:50.304 [2024-07-22 20:47:02.079963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.080041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.080057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.080065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.080072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.080087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.090144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.090243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.090260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.090271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.090277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.090293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.099996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.100062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.100078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.100086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.100092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.100107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 
00:39:50.304 [2024-07-22 20:47:02.109996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.110066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.110082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.110091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.110097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.110112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.119958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.120028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.120043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.120051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.120057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.120072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.130246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.130375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.130391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.130400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.130412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.130427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 
00:39:50.304 [2024-07-22 20:47:02.140155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.140263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.140280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.140288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.140294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.140310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.150111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.150212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.150228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.150237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.150244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.150259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 00:39:50.304 [2024-07-22 20:47:02.160195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.160312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.160328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.160336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.160343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.304 [2024-07-22 20:47:02.160358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.304 qpair failed and we were unable to recover it. 
00:39:50.304 [2024-07-22 20:47:02.170372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.304 [2024-07-22 20:47:02.170448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.304 [2024-07-22 20:47:02.170464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.304 [2024-07-22 20:47:02.170471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.304 [2024-07-22 20:47:02.170478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.170494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.180095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.180163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.180179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.180189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.180196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.180218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.190226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.190317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.190334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.190342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.190349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.190364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 
00:39:50.305 [2024-07-22 20:47:02.200331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.200399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.200415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.200422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.200428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.200443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.210459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.210534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.210550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.210558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.210564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.210580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.220314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.220384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.220401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.220409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.220415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.220431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 
00:39:50.305 [2024-07-22 20:47:02.230327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.230396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.230412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.230419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.230425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.230441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.240367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.240435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.240451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.240459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.240464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.240480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.250614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.250683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.250699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.250707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.250713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.250729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 
00:39:50.305 [2024-07-22 20:47:02.260377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.260453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.260468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.260476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.260482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.260498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.270459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.270525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.270543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.270552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.270558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.270574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.280478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.280544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.280560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.280568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.280574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.280589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 
00:39:50.305 [2024-07-22 20:47:02.290697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.290786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.290802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.290811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.290817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.290832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.305 [2024-07-22 20:47:02.300503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.305 [2024-07-22 20:47:02.300571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.305 [2024-07-22 20:47:02.300587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.305 [2024-07-22 20:47:02.300595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.305 [2024-07-22 20:47:02.300601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.305 [2024-07-22 20:47:02.300616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.305 qpair failed and we were unable to recover it. 00:39:50.306 [2024-07-22 20:47:02.310546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.306 [2024-07-22 20:47:02.310619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.306 [2024-07-22 20:47:02.310635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.306 [2024-07-22 20:47:02.310643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.306 [2024-07-22 20:47:02.310649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.306 [2024-07-22 20:47:02.310668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.306 qpair failed and we were unable to recover it. 
00:39:50.306 [2024-07-22 20:47:02.320630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.306 [2024-07-22 20:47:02.320697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.306 [2024-07-22 20:47:02.320713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.306 [2024-07-22 20:47:02.320722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.306 [2024-07-22 20:47:02.320728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.306 [2024-07-22 20:47:02.320744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.306 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.330860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.330927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.330942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.330951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.330957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.330972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.340642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.340711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.340727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.340734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.340741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.340756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 
00:39:50.568 [2024-07-22 20:47:02.350640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.350741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.350758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.350766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.350772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.350787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.360727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.360804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.360822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.360830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.360836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.360851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.370908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.370989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.371012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.371022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.371029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.371049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 
00:39:50.568 [2024-07-22 20:47:02.380666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.380741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.380758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.380766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.380773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.380790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.390778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.390847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.390869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.390877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.390883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.390900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 00:39:50.568 [2024-07-22 20:47:02.400827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.400900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.400916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.568 [2024-07-22 20:47:02.400924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.568 [2024-07-22 20:47:02.400933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.568 [2024-07-22 20:47:02.400950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.568 qpair failed and we were unable to recover it. 
00:39:50.568 [2024-07-22 20:47:02.411048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.568 [2024-07-22 20:47:02.411135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.568 [2024-07-22 20:47:02.411151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.411160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.411166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.411181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.420859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.420926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.420942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.420950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.420956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.420972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.430876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.430945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.430960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.430968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.430974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.430991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 
00:39:50.569 [2024-07-22 20:47:02.440908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.440975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.440991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.440999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.441005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.441020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.451117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.451192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.451215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.451224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.451230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.451246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.461013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.461125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.461141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.461149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.461156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.461171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 
00:39:50.569 [2024-07-22 20:47:02.470979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.471046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.471062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.471071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.471076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.471092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.481013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.481079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.481095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.481103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.481109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.481125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.491249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.491370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.491386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.491396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.491403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.491419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 
00:39:50.569 [2024-07-22 20:47:02.501076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.501146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.501162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.501170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.501176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.501191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.511090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.511206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.511223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.511231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.511238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.511254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.521129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.521198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.521218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.521226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.521232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.521249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 
00:39:50.569 [2024-07-22 20:47:02.531315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.531388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.531403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.531412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.569 [2024-07-22 20:47:02.531418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.569 [2024-07-22 20:47:02.531434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.569 qpair failed and we were unable to recover it. 00:39:50.569 [2024-07-22 20:47:02.541168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.569 [2024-07-22 20:47:02.541247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.569 [2024-07-22 20:47:02.541262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.569 [2024-07-22 20:47:02.541270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.570 [2024-07-22 20:47:02.541277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.570 [2024-07-22 20:47:02.541293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.570 qpair failed and we were unable to recover it. 00:39:50.570 [2024-07-22 20:47:02.551212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.570 [2024-07-22 20:47:02.551286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.570 [2024-07-22 20:47:02.551302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.570 [2024-07-22 20:47:02.551310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.570 [2024-07-22 20:47:02.551316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.570 [2024-07-22 20:47:02.551332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.570 qpair failed and we were unable to recover it. 
00:39:50.570 [2024-07-22 20:47:02.561308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.570 [2024-07-22 20:47:02.561377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.570 [2024-07-22 20:47:02.561393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.570 [2024-07-22 20:47:02.561401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.570 [2024-07-22 20:47:02.561407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.570 [2024-07-22 20:47:02.561423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.570 qpair failed and we were unable to recover it. 00:39:50.570 [2024-07-22 20:47:02.571462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.570 [2024-07-22 20:47:02.571533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.570 [2024-07-22 20:47:02.571549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.570 [2024-07-22 20:47:02.571558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.570 [2024-07-22 20:47:02.571564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.570 [2024-07-22 20:47:02.571580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.570 qpair failed and we were unable to recover it. 00:39:50.570 [2024-07-22 20:47:02.581260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.570 [2024-07-22 20:47:02.581328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.570 [2024-07-22 20:47:02.581344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.570 [2024-07-22 20:47:02.581354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.570 [2024-07-22 20:47:02.581360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.570 [2024-07-22 20:47:02.581376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.570 qpair failed and we were unable to recover it. 
00:39:50.832 [2024-07-22 20:47:02.591304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.832 [2024-07-22 20:47:02.591372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.832 [2024-07-22 20:47:02.591388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.832 [2024-07-22 20:47:02.591396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.832 [2024-07-22 20:47:02.591402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.832 [2024-07-22 20:47:02.591417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.832 qpair failed and we were unable to recover it. 00:39:50.832 [2024-07-22 20:47:02.601357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.832 [2024-07-22 20:47:02.601424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.832 [2024-07-22 20:47:02.601439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.832 [2024-07-22 20:47:02.601447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.601453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.601469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.611515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.611613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.611629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.611637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.611643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.611659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 
00:39:50.833 [2024-07-22 20:47:02.621374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.621453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.621469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.621477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.621483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.621498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.631452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.631521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.631537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.631545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.631551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.631567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.641470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.641583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.641599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.641608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.641614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.641644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 
00:39:50.833 [2024-07-22 20:47:02.651676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.651769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.651786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.651794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.651800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.651815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.661501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.661571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.661587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.661595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.661601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.661617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.671585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.671658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.671677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.671685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.671691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.671711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 
00:39:50.833 [2024-07-22 20:47:02.681567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.681636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.681652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.681660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.681666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.681682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.691799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.691874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.691890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.691898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.691904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.691920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.701601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.701672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.701689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.701697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.701704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.701720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 
00:39:50.833 [2024-07-22 20:47:02.711707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.711781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.711797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.711805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.711811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.711831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.721623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.721692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.721708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.721716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.721722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.833 [2024-07-22 20:47:02.721739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.833 qpair failed and we were unable to recover it. 00:39:50.833 [2024-07-22 20:47:02.731874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.833 [2024-07-22 20:47:02.731955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.833 [2024-07-22 20:47:02.731971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.833 [2024-07-22 20:47:02.731980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.833 [2024-07-22 20:47:02.731986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.732002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 
00:39:50.834 [2024-07-22 20:47:02.741699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.741776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.741799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.741809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.741816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.741836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.751729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.751806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.751830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.751841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.751847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.751867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.761784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.761858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.761878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.761887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.761894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.761910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 
00:39:50.834 [2024-07-22 20:47:02.771972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.772063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.772079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.772088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.772095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.772111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.781810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.781882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.781898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.781907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.781913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.781929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.791845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.791914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.791930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.791939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.791945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.791961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 
00:39:50.834 [2024-07-22 20:47:02.801873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.801941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.801957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.801965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.801974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.801990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.812090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.812159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.812175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.812183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.812189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.812210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.821902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.821971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.821988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.821996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.822002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.822018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 
00:39:50.834 [2024-07-22 20:47:02.831928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.832000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.832016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.832024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.832030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.832047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.841987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.842055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.842072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.842080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.842086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.842101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 00:39:50.834 [2024-07-22 20:47:02.852185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:50.834 [2024-07-22 20:47:02.852266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:50.834 [2024-07-22 20:47:02.852283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:50.834 [2024-07-22 20:47:02.852291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:50.834 [2024-07-22 20:47:02.852297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:50.834 [2024-07-22 20:47:02.852313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:50.834 qpair failed and we were unable to recover it. 
00:39:51.096 [2024-07-22 20:47:02.862021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.862093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.862108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.862117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.862123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.862138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.871956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.872035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.872052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.872059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.872066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.872082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.882087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.882156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.882172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.882180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.882186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.882211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 
00:39:51.096 [2024-07-22 20:47:02.892348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.892416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.892432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.892442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.892450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.892467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.902127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.902197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.902218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.902231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.902237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.902253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.912161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.912234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.912250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.912258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.912264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.912281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 
00:39:51.096 [2024-07-22 20:47:02.922204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.922271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.922288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.922295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.922302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.922317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.932431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.932502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.932518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.932526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.932532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.932548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.096 qpair failed and we were unable to recover it. 00:39:51.096 [2024-07-22 20:47:02.942248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.096 [2024-07-22 20:47:02.942318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.096 [2024-07-22 20:47:02.942335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.096 [2024-07-22 20:47:02.942343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.096 [2024-07-22 20:47:02.942349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.096 [2024-07-22 20:47:02.942365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 
00:39:51.097 [2024-07-22 20:47:02.952267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:02.952336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:02.952352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:02.952360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:02.952366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:02.952382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:02.962222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:02.962287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:02.962303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:02.962311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:02.962317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:02.962333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:02.972517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:02.972607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:02.972623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:02.972631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:02.972637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:02.972653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 
00:39:51.097 [2024-07-22 20:47:02.982350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:02.982421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:02.982437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:02.982450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:02.982456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:02.982472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:02.992413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:02.992504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:02.992520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:02.992529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:02.992536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:02.992551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.002440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.002508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.002524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.002533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.002539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.002555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 
00:39:51.097 [2024-07-22 20:47:03.012623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.012767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.012784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.012792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.012797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.012813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.022455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.022523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.022540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.022548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.022554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.022570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.032470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.032546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.032562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.032570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.032576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.032592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 
00:39:51.097 [2024-07-22 20:47:03.042536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.042606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.042621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.042629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.042635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.042651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.052667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.052742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.052758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.052766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.052772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.052789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.062495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.062562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.062578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.062586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.062592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.062607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 
00:39:51.097 [2024-07-22 20:47:03.072592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.097 [2024-07-22 20:47:03.072673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.097 [2024-07-22 20:47:03.072692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.097 [2024-07-22 20:47:03.072700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.097 [2024-07-22 20:47:03.072707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.097 [2024-07-22 20:47:03.072722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.097 qpair failed and we were unable to recover it. 00:39:51.097 [2024-07-22 20:47:03.082815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.098 [2024-07-22 20:47:03.082883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.098 [2024-07-22 20:47:03.082899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.098 [2024-07-22 20:47:03.082907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.098 [2024-07-22 20:47:03.082913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.098 [2024-07-22 20:47:03.082928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.098 qpair failed and we were unable to recover it. 00:39:51.098 [2024-07-22 20:47:03.092848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.098 [2024-07-22 20:47:03.092918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.098 [2024-07-22 20:47:03.092934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.098 [2024-07-22 20:47:03.092942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.098 [2024-07-22 20:47:03.092948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.098 [2024-07-22 20:47:03.092963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.098 qpair failed and we were unable to recover it. 
00:39:51.098 [2024-07-22 20:47:03.102689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.098 [2024-07-22 20:47:03.102756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.098 [2024-07-22 20:47:03.102772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.098 [2024-07-22 20:47:03.102780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.098 [2024-07-22 20:47:03.102786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.098 [2024-07-22 20:47:03.102802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.098 qpair failed and we were unable to recover it. 00:39:51.098 [2024-07-22 20:47:03.112707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.098 [2024-07-22 20:47:03.112780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.098 [2024-07-22 20:47:03.112795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.098 [2024-07-22 20:47:03.112803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.098 [2024-07-22 20:47:03.112809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.098 [2024-07-22 20:47:03.112827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.098 qpair failed and we were unable to recover it. 00:39:51.360 [2024-07-22 20:47:03.122752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.360 [2024-07-22 20:47:03.122843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.360 [2024-07-22 20:47:03.122859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.360 [2024-07-22 20:47:03.122868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.360 [2024-07-22 20:47:03.122874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.360 [2024-07-22 20:47:03.122890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.360 qpair failed and we were unable to recover it. 
00:39:51.360 [2024-07-22 20:47:03.133003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.360 [2024-07-22 20:47:03.133119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.360 [2024-07-22 20:47:03.133135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.360 [2024-07-22 20:47:03.133143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.360 [2024-07-22 20:47:03.133149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.360 [2024-07-22 20:47:03.133164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.360 qpair failed and we were unable to recover it. 00:39:51.360 [2024-07-22 20:47:03.142785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.360 [2024-07-22 20:47:03.142852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.360 [2024-07-22 20:47:03.142868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.360 [2024-07-22 20:47:03.142876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.360 [2024-07-22 20:47:03.142882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.360 [2024-07-22 20:47:03.142898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.360 qpair failed and we were unable to recover it. 00:39:51.360 [2024-07-22 20:47:03.152808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.360 [2024-07-22 20:47:03.152879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.360 [2024-07-22 20:47:03.152895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.152903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.152909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.152925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 
00:39:51.361 [2024-07-22 20:47:03.162857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.162924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.162942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.162951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.162957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.162974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.173060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.173219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.173236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.173244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.173251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.173267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.182892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.182962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.182978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.182986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.182992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.183007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 
00:39:51.361 [2024-07-22 20:47:03.192918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.192990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.193006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.193014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.193020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.193036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.202943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.203014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.203030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.203038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.203046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.203062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.213133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.213213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.213229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.213237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.213244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.213261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 
00:39:51.361 [2024-07-22 20:47:03.223010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.223077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.223093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.223101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.223107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.223122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.233049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.233158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.233174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.233182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.233189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.233210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.242975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.243039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.243055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.243063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.243068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.243084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 
00:39:51.361 [2024-07-22 20:47:03.253280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.253363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.253380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.253387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.253393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.253410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.263112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.263179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.263195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.263210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.263216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.263232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 00:39:51.361 [2024-07-22 20:47:03.273151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.273234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.273250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.273258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.273265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.361 [2024-07-22 20:47:03.273281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.361 qpair failed and we were unable to recover it. 
00:39:51.361 [2024-07-22 20:47:03.283188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.361 [2024-07-22 20:47:03.283262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.361 [2024-07-22 20:47:03.283279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.361 [2024-07-22 20:47:03.283288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.361 [2024-07-22 20:47:03.283294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.283312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.293423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.293514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.293529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.293539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.293547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.293563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.303220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.303289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.303305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.303314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.303320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.303336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 
00:39:51.362 [2024-07-22 20:47:03.313412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.313478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.313495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.313503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.313509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.313524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.323279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.323347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.323363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.323372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.323378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.323393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.333579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.333680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.333696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.333704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.333711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.333726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 
00:39:51.362 [2024-07-22 20:47:03.343351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.343420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.343436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.343444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.343450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.343466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.353353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.353425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.353440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.353449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.353455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.353471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.362 [2024-07-22 20:47:03.363406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.363499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.363515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.363523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.363529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.363545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 
00:39:51.362 [2024-07-22 20:47:03.373696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.362 [2024-07-22 20:47:03.373776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.362 [2024-07-22 20:47:03.373792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.362 [2024-07-22 20:47:03.373800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.362 [2024-07-22 20:47:03.373806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.362 [2024-07-22 20:47:03.373822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.362 qpair failed and we were unable to recover it. 00:39:51.624 [2024-07-22 20:47:03.383479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.383547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.383563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.383574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.383580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.383596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.393475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.393543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.393559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.393568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.393574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.393589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 
00:39:51.625 [2024-07-22 20:47:03.403494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.403608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.403625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.403633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.403639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.403655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.413700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.413794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.413810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.413819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.413829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.413845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.423557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.423629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.423645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.423653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.423659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.423674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 
00:39:51.625 [2024-07-22 20:47:03.433565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.433633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.433649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.433657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.433663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.433679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.443627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.443698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.443714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.443722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.443728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.443743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.453739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.453807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.453823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.453831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.453837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.453853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 
00:39:51.625 [2024-07-22 20:47:03.463640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.463706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.463721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.463729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.463736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.463751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.473663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.473732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.473750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.473758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.473764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.473780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.483725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.483793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.483809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.483817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.483823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.483838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 
00:39:51.625 [2024-07-22 20:47:03.493890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.493996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.625 [2024-07-22 20:47:03.494011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.625 [2024-07-22 20:47:03.494019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.625 [2024-07-22 20:47:03.494025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.625 [2024-07-22 20:47:03.494041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.625 qpair failed and we were unable to recover it. 00:39:51.625 [2024-07-22 20:47:03.503902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.625 [2024-07-22 20:47:03.504014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.504037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.504047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.504054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.504074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.513782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.513851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.513868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.513877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.513883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.513903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 
00:39:51.626 [2024-07-22 20:47:03.523818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.523912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.523929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.523937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.523944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.523959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.534023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.534091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.534107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.534115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.534121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.534137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.543843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.543910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.543926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.543934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.543940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.543955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 
00:39:51.626 [2024-07-22 20:47:03.553902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.553970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.553986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.553994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.554000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.554015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.563936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.564002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.564022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.564030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.564036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.564052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.574203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.574274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.574290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.574298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.574304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.574320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 
00:39:51.626 [2024-07-22 20:47:03.583965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.584065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.584081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.584089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.584095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.584111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.594010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.594102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.594117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.594125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.594132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.594148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.604051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.604122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.604138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.604146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.604152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.604170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 
00:39:51.626 [2024-07-22 20:47:03.614253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.614333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.614349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.614357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.614363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.614379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.623968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.624037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.624053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.624060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.626 [2024-07-22 20:47:03.624067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.626 [2024-07-22 20:47:03.624082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.626 qpair failed and we were unable to recover it. 00:39:51.626 [2024-07-22 20:47:03.634108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.626 [2024-07-22 20:47:03.634174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.626 [2024-07-22 20:47:03.634190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.626 [2024-07-22 20:47:03.634198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.627 [2024-07-22 20:47:03.634209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.627 [2024-07-22 20:47:03.634224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.627 qpair failed and we were unable to recover it. 
00:39:51.627 [2024-07-22 20:47:03.644141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.627 [2024-07-22 20:47:03.644216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.627 [2024-07-22 20:47:03.644232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.627 [2024-07-22 20:47:03.644240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.627 [2024-07-22 20:47:03.644246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.627 [2024-07-22 20:47:03.644263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.627 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.654153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.654226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.654242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.654251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.654257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.654273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.664165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.664239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.664255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.664263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.664269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.664285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 
00:39:51.889 [2024-07-22 20:47:03.674219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.674295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.674315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.674323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.674329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.674345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.684237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.684315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.684331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.684341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.684347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.684365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.694305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.694380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.694396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.694404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.694413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.694429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 
00:39:51.889 [2024-07-22 20:47:03.704293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.704362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.704378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.704386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.704392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.704408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.714335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.714400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.714415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.714423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.714429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.714446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.724349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.724421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.724437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.724445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.724452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.724469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 
00:39:51.889 [2024-07-22 20:47:03.734354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.734419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.889 [2024-07-22 20:47:03.734435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.889 [2024-07-22 20:47:03.734443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.889 [2024-07-22 20:47:03.734450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.889 [2024-07-22 20:47:03.734465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.889 qpair failed and we were unable to recover it. 00:39:51.889 [2024-07-22 20:47:03.744398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.889 [2024-07-22 20:47:03.744481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.744496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.744504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.744511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.744526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.754420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.754489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.754505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.754512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.754519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.754534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 
00:39:51.890 [2024-07-22 20:47:03.764550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.764618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.764633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.764641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.764648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.764664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.774476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.774547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.774563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.774571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.774577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.774593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.784516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.784589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.784605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.784615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.784622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.784641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 
00:39:51.890 [2024-07-22 20:47:03.794518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.794587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.794602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.794610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.794616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.794632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.804589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.804657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.804673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.804681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.804687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.804703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.814530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.814603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.814618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.814626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.814632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.814647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 
00:39:51.890 [2024-07-22 20:47:03.824636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.824727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.824743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.824751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.824757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.824772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.834623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.834690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.834705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.834713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.834719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.834735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.844682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.844748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.844763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.844771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.844777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.844792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 
00:39:51.890 [2024-07-22 20:47:03.854710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.854793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.854809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.854817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.854824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.854839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.864713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.864803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.864818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.864827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.890 [2024-07-22 20:47:03.864833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.890 [2024-07-22 20:47:03.864848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.890 qpair failed and we were unable to recover it. 00:39:51.890 [2024-07-22 20:47:03.874783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.890 [2024-07-22 20:47:03.874866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.890 [2024-07-22 20:47:03.874889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.890 [2024-07-22 20:47:03.874902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.891 [2024-07-22 20:47:03.874910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.891 [2024-07-22 20:47:03.874930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.891 qpair failed and we were unable to recover it. 
00:39:51.891 [2024-07-22 20:47:03.884826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.891 [2024-07-22 20:47:03.884899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.891 [2024-07-22 20:47:03.884921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.891 [2024-07-22 20:47:03.884931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.891 [2024-07-22 20:47:03.884939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.891 [2024-07-22 20:47:03.884958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.891 qpair failed and we were unable to recover it. 00:39:51.891 [2024-07-22 20:47:03.894798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.891 [2024-07-22 20:47:03.894871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.891 [2024-07-22 20:47:03.894894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.891 [2024-07-22 20:47:03.894904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.891 [2024-07-22 20:47:03.894912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.891 [2024-07-22 20:47:03.894931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.891 qpair failed and we were unable to recover it. 00:39:51.891 [2024-07-22 20:47:03.904793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:51.891 [2024-07-22 20:47:03.904882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:51.891 [2024-07-22 20:47:03.904900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:51.891 [2024-07-22 20:47:03.904908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:51.891 [2024-07-22 20:47:03.904915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:51.891 [2024-07-22 20:47:03.904931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:51.891 qpair failed and we were unable to recover it. 
00:39:52.152 [2024-07-22 20:47:03.914910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.152 [2024-07-22 20:47:03.914981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.152 [2024-07-22 20:47:03.914998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.152 [2024-07-22 20:47:03.915006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.152 [2024-07-22 20:47:03.915012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.152 [2024-07-22 20:47:03.915030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.152 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.924896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.924971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.924994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.925006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.925013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.925038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.934917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.934985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.935002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.935011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.935017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.935034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 
00:39:52.153 [2024-07-22 20:47:03.945014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.945117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.945134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.945142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.945149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.945165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.954951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.955033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.955049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.955057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.955063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.955079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.965037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.965131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.965150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.965159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.965165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.965181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 
00:39:52.153 [2024-07-22 20:47:03.975017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.975080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.975096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.975104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.975110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.975126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.985062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.985131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.985147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.985155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.985161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.985176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:03.995010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:03.995088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:03.995104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:03.995112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:03.995118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:03.995134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 
00:39:52.153 [2024-07-22 20:47:04.005110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.005176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:04.005192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:04.005206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:04.005213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:04.005231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:04.015124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.015198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:04.015218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:04.015226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:04.015232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:04.015248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:04.025179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.025254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:04.025270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:04.025279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:04.025285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:04.025302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 
00:39:52.153 [2024-07-22 20:47:04.035178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.035250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:04.035265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:04.035274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:04.035280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:04.035296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:04.045229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.045296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.153 [2024-07-22 20:47:04.045311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.153 [2024-07-22 20:47:04.045319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.153 [2024-07-22 20:47:04.045325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.153 [2024-07-22 20:47:04.045341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.153 qpair failed and we were unable to recover it. 00:39:52.153 [2024-07-22 20:47:04.055247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.153 [2024-07-22 20:47:04.055328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.055346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.055355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.055361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.055377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 
00:39:52.154 [2024-07-22 20:47:04.065269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.065339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.065355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.065362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.065368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.065384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.075289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.075358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.075374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.075382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.075388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.075404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.085354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.085425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.085441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.085449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.085455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.085472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 
00:39:52.154 [2024-07-22 20:47:04.095366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.095440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.095456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.095464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.095473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.095489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.105397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.105467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.105483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.105492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.105498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.105514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.115420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.115496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.115512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.115520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.115526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.115543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 
00:39:52.154 [2024-07-22 20:47:04.125446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.125515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.125530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.125538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.125544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.125559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.135478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.135554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.135570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.135578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.135584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.135599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.145496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.145572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.145587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.145595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.145601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.145617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 
00:39:52.154 [2024-07-22 20:47:04.155502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.155601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.155618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.155626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.155632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.155647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.154 [2024-07-22 20:47:04.165605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.154 [2024-07-22 20:47:04.165673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.154 [2024-07-22 20:47:04.165689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.154 [2024-07-22 20:47:04.165696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.154 [2024-07-22 20:47:04.165702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.154 [2024-07-22 20:47:04.165718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.154 qpair failed and we were unable to recover it. 00:39:52.416 [2024-07-22 20:47:04.175576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.416 [2024-07-22 20:47:04.175642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.416 [2024-07-22 20:47:04.175658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.416 [2024-07-22 20:47:04.175665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.416 [2024-07-22 20:47:04.175671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.416 [2024-07-22 20:47:04.175687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.416 qpair failed and we were unable to recover it. 
00:39:52.416 [2024-07-22 20:47:04.185596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.416 [2024-07-22 20:47:04.185666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.416 [2024-07-22 20:47:04.185682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.185696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.185702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.185719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.195613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.195679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.195694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.195703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.195709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.195724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.205665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.205732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.205747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.205755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.205761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.205777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 
00:39:52.417 [2024-07-22 20:47:04.215697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.215777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.215792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.215800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.215807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.215822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.225693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.225763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.225778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.225787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.225793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.225808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.235738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.235803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.235819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.235827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.235833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.235848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 
00:39:52.417 [2024-07-22 20:47:04.245761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.245827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.245842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.245851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.245857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.245872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.255828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.255948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.255972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.255982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.255989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.256008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.265814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.265888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.265911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.265921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.265928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.265948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 
00:39:52.417 [2024-07-22 20:47:04.275838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.275914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.275937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.275950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.275957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.275978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.285796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.285876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.285899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.285909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.285916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.285937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.295870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.295944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.295967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.295977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.295985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.296004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 
00:39:52.417 [2024-07-22 20:47:04.305917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.305986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.306003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.306012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.417 [2024-07-22 20:47:04.306018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.417 [2024-07-22 20:47:04.306035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.417 qpair failed and we were unable to recover it. 00:39:52.417 [2024-07-22 20:47:04.315936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.417 [2024-07-22 20:47:04.316007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.417 [2024-07-22 20:47:04.316023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.417 [2024-07-22 20:47:04.316031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.316037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.316054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.325984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.326066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.326082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.326090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.326097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.326112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 
00:39:52.418 [2024-07-22 20:47:04.335996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.336097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.336114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.336122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.336128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.336144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.346013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.346083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.346099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.346106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.346112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.346128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.356078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.356176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.356193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.356206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.356214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.356231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 
00:39:52.418 [2024-07-22 20:47:04.366087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.366156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.366175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.366183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.366188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.366210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.376101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.376173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.376189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.376197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.376210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.376226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.386146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.386221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.386237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.386245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.386251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.386266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 
00:39:52.418 [2024-07-22 20:47:04.396156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.396233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.396249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.396258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.396263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.396280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.406179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.406251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.406267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.406275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.406281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.406300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.416213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.416285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.416301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.416309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.416315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.416331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 
00:39:52.418 [2024-07-22 20:47:04.426225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.426295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.426311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.426320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.426325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.426343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.418 [2024-07-22 20:47:04.436271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.418 [2024-07-22 20:47:04.436338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.418 [2024-07-22 20:47:04.436354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.418 [2024-07-22 20:47:04.436362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.418 [2024-07-22 20:47:04.436368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.418 [2024-07-22 20:47:04.436384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.418 qpair failed and we were unable to recover it. 00:39:52.680 [2024-07-22 20:47:04.446294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.446363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.446379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.446387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.446393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.446409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 
00:39:52.680 [2024-07-22 20:47:04.456343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.456412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.456430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.456438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.456444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.456462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 00:39:52.680 [2024-07-22 20:47:04.466274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.466344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.466360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.466368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.466374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.466389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 00:39:52.680 [2024-07-22 20:47:04.476389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.476471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.476487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.476495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.476502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.476517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 
00:39:52.680 [2024-07-22 20:47:04.486420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.486541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.486557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.486565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.486571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.486587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 00:39:52.680 [2024-07-22 20:47:04.496438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.496503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.496520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.496529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.496538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003a0000 00:39:52.680 [2024-07-22 20:47:04.496557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:52.680 qpair failed and we were unable to recover it. 00:39:52.680 [2024-07-22 20:47:04.506633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.506776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.506857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.506896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.506925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500038fe80 00:39:52.680 [2024-07-22 20:47:04.507000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:52.680 qpair failed and we were unable to recover it. 
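Each failed attempt above follows the same pattern: the target rejects the I/O-queue fabrics CONNECT with "Unknown controller ID 0x1" (the controller it belonged to has already been torn down by the disconnect test), the host's connect poll reports the rejection as sct 1 / sc 130, the TCP qpair is marked failed, and the completion poller surfaces it as a CQ transport error. While these retries are running, the target side can be inspected over SPDK's JSON-RPC socket; the sketch below is illustrative only, and the rpc.py path and default /var/tmp/spdk.sock socket are assumptions based on the workspace layout seen elsewhere in this log.

    # Hedged sketch: look at the nvmf target while the host retries its CONNECTs.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_get_subsystems                                     # subsystems, listeners, allowed hosts
    $RPC nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1    # queue pairs currently known to the target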
00:39:52.680 [2024-07-22 20:47:04.516595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.680 [2024-07-22 20:47:04.516767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.680 [2024-07-22 20:47:04.516831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.680 [2024-07-22 20:47:04.516863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.680 [2024-07-22 20:47:04.516886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500038fe80 00:39:52.680 [2024-07-22 20:47:04.516947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:52.681 qpair failed and we were unable to recover it. 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error 
(sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Read completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 Write completed with error (sct=0, sc=8) 00:39:52.681 starting I/O failed 00:39:52.681 [2024-07-22 20:47:04.518016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:52.681 [2024-07-22 20:47:04.518045] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:39:52.681 A controller has encountered a failure and is being reset. 00:39:52.681 [2024-07-22 20:47:04.526835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.681 [2024-07-22 20:47:04.527035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.681 [2024-07-22 20:47:04.527119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.681 [2024-07-22 20:47:04.527160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.681 [2024-07-22 20:47:04.527189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:52.681 [2024-07-22 20:47:04.527278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:52.681 qpair failed and we were unable to recover it. 00:39:52.681 [2024-07-22 20:47:04.536599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.681 [2024-07-22 20:47:04.536728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.681 [2024-07-22 20:47:04.536771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.681 [2024-07-22 20:47:04.536796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.681 [2024-07-22 20:47:04.536815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00 00:39:52.681 [2024-07-22 20:47:04.536861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:52.681 qpair failed and we were unable to recover it. 
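The run of "completed with error (sct=0, sc=8)" completions above is the outstanding reads and writes on qpair id 3 being failed back once the transport drops, and the failed Keep Alive submission is what finally flags the controller ("A controller has encountered a failure and is being reset."). For reference, the same NVMe/TCP association can be exercised by hand from a Linux initiator with nvme-cli; the address, service id, and subsystem NQN below are simply the values recorded in this log, and the sketch assumes the host has the nvme-tcp transport module available.

    # Hedged sketch: manual NVMe/TCP connect and disconnect against the test target.
    modprobe nvme-tcp                                  # host-side NVMe/TCP transport
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                          # the subsystem's namespaces appear as /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # tear the association down again

Dropping the target while such an association still has I/O in flight reproduces the completion-error and reconnect pattern captured here.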
00:39:52.681 [2024-07-22 20:47:04.546627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.681 [2024-07-22 20:47:04.546759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.681 [2024-07-22 20:47:04.546792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.681 [2024-07-22 20:47:04.546808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.681 [2024-07-22 20:47:04.546819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000388b80 00:39:52.681 [2024-07-22 20:47:04.546849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:52.681 qpair failed and we were unable to recover it. 00:39:52.681 [2024-07-22 20:47:04.556594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:52.681 [2024-07-22 20:47:04.556684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:52.681 [2024-07-22 20:47:04.556717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:52.681 [2024-07-22 20:47:04.556733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:52.681 [2024-07-22 20:47:04.556744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000388b80 00:39:52.681 [2024-07-22 20:47:04.556773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:52.681 qpair failed and we were unable to recover it. 00:39:52.681 [2024-07-22 20:47:04.557067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000388680 (9): Bad file descriptor 00:39:52.681 Controller properly reset. 00:39:52.681 Initializing NVMe Controllers 00:39:52.681 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:52.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:52.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:39:52.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:39:52.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:39:52.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:39:52.681 Initialization complete. Launching workers. 
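Once the log reports "Controller properly reset." the initiator re-attaches to 10.0.0.2:4420 and assigns the connection to lcores 0 through 3, which is what the "Associating TCP ... with lcore N" lines record. A comparable multi-core NVMe/TCP workload can be aimed at such a target with SPDK's perf example; the binary path and the queue-depth, I/O-size, and runtime values below are assumptions chosen for illustration, while the transport ID string is taken verbatim from this log.

    # Hedged sketch: a 4-core NVMe/TCP workload against the same subsystem.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/perf -c 0xF -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each bit set in the -c core mask gets its own submission thread, matching the four "Starting thread on core N" lines that follow.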
00:39:52.681 Starting thread on core 1 00:39:52.681 Starting thread on core 2 00:39:52.681 Starting thread on core 3 00:39:52.681 Starting thread on core 0 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:39:52.681 00:39:52.681 real 0m11.606s 00:39:52.681 user 0m19.983s 00:39:52.681 sys 0m3.913s 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:52.681 ************************************ 00:39:52.681 END TEST nvmf_target_disconnect_tc2 00:39:52.681 ************************************ 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:52.681 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:52.942 rmmod nvme_tcp 00:39:52.942 rmmod nvme_fabrics 00:39:52.942 rmmod nvme_keyring 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3903398 ']' 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3903398 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3903398 ']' 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3903398 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3903398 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3903398' 00:39:52.942 
killing process with pid 3903398 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3903398 00:39:52.942 20:47:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3903398 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.883 20:47:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:55.798 00:39:55.798 real 0m21.611s 00:39:55.798 user 0m49.289s 00:39:55.798 sys 0m9.480s 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:55.798 ************************************ 00:39:55.798 END TEST nvmf_target_disconnect 00:39:55.798 ************************************ 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:55.798 00:39:55.798 real 8m8.902s 00:39:55.798 user 18m16.493s 00:39:55.798 sys 2m18.546s 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.798 20:47:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.798 ************************************ 00:39:55.798 END TEST nvmf_host 00:39:55.798 ************************************ 00:39:55.798 20:47:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:55.798 00:39:55.798 real 30m58.229s 00:39:55.798 user 77m40.437s 00:39:55.798 sys 7m59.880s 00:39:55.798 20:47:07 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.798 20:47:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:55.798 ************************************ 00:39:55.798 END TEST nvmf_tcp 00:39:55.798 ************************************ 00:39:55.798 20:47:07 -- common/autotest_common.sh@1142 -- # return 0 00:39:55.798 20:47:07 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:39:55.798 20:47:07 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:55.798 20:47:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:55.798 20:47:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:55.798 20:47:07 -- common/autotest_common.sh@10 -- # set +x 00:39:55.798 ************************************ 00:39:55.798 START TEST spdkcli_nvmf_tcp 00:39:55.798 ************************************ 00:39:55.798 20:47:07 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:56.060 * Looking for test storage... 00:39:56.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3905236 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3905236 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3905236 ']' 00:39:56.060 20:47:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:56.061 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.061 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:56.061 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.061 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:56.061 20:47:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.061 [2024-07-22 20:47:08.000402] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:56.061 [2024-07-22 20:47:08.000509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905236 ] 00:39:56.061 EAL: No free 2048 kB hugepages reported on node 1 00:39:56.322 [2024-07-22 20:47:08.113633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:56.322 [2024-07-22 20:47:08.291289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.322 [2024-07-22 20:47:08.291432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.894 20:47:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:56.894 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:56.894 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:56.894 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:56.894 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:56.894 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:56.894 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:56.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:56.894 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.894 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.894 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:56.895 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:56.895 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:56.895 ' 00:39:59.479 [2024-07-22 20:47:11.205822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.422 [2024-07-22 20:47:12.369718] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:02.966 [2024-07-22 20:47:14.503814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:04.350 [2024-07-22 20:47:16.337192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:05.736 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:05.736 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:05.736 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:05.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:05.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:05.736 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:05.736 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:05.737 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:05.737 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:05.737 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:05.998 20:47:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:06.258 20:47:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:06.519 20:47:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:06.519 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:06.519 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:06.519 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:06.519 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:06.519 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:06.519 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:06.519 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:06.519 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:06.519 ' 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:11.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:11.809 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:11.809 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:11.809 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3905236 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3905236 ']' 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3905236 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3905236 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3905236' 00:40:11.809 killing process with pid 3905236 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3905236 00:40:11.809 20:47:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3905236 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3905236 ']' 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3905236 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3905236 ']' 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3905236 00:40:12.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3905236) - No such process 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3905236 is not found' 00:40:12.751 Process with pid 3905236 is not found 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:12.751 00:40:12.751 real 0m16.707s 00:40:12.751 user 0m33.736s 00:40:12.751 sys 0m0.845s 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:12.751 20:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:12.751 ************************************ 00:40:12.751 END TEST spdkcli_nvmf_tcp 00:40:12.751 ************************************ 00:40:12.751 20:47:24 -- common/autotest_common.sh@1142 -- # return 0 00:40:12.751 20:47:24 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:12.751 20:47:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:12.752 20:47:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:12.752 20:47:24 -- common/autotest_common.sh@10 -- # set +x 00:40:12.752 ************************************ 00:40:12.752 START TEST nvmf_identify_passthru 00:40:12.752 ************************************ 00:40:12.752 20:47:24 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:12.752 * Looking for test storage... 00:40:12.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.752 20:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:12.752 20:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:12.752 20:47:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.752 20:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.752 20:47:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:12.752 20:47:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:12.752 20:47:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:40:12.752 20:47:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.340 20:47:31 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:19.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:19.340 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:19.341 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:19.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:19.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
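[editor's note] The device scan above resolves each supported NIC (Intel E810, device IDs 0x1592/0x159b on this testbed) to its kernel net device by walking sysfs, which is how the harness fills pci_devs and pci_net_devs. A minimal standalone sketch of that lookup, assuming the same Intel IDs and the /sys/bus/pci layout read here; other hardware needs different IDs.

# Sketch: map supported E810 PCI functions to their net devices, the way
# gather_supported_nvmf_pci_devs does above. IDs taken from this log.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")        # e.g. 0x8086
    device=$(cat "$dev/device")        # e.g. 0x159b
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    for net in "$dev"/net/*; do
        [[ -e $net ]] || continue      # skip functions with no bound net driver
        echo "Found net device under ${dev##*/}: ${net##*/}"
    done
done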
00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:19.341 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:19.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:40:19.602 00:40:19.602 --- 10.0.0.2 ping statistics --- 00:40:19.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.602 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:19.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:40:19.602 00:40:19.602 --- 10.0.0.1 ping statistics --- 00:40:19.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.602 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:19.602 20:47:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:19.602 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:19.602 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:19.602 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:40:19.863 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:40:19.863 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:40:19.863 20:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:40:19.863 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:19.863 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:19.863 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:19.863 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:19.863 20:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:19.863 EAL: No free 2048 kB hugepages reported on node 1 00:40:20.435 
20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:40:20.435 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:20.435 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:20.435 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:20.435 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3912317 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:21.006 20:47:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3912317 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3912317 ']' 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:21.006 20:47:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.006 [2024-07-22 20:47:32.962503] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:21.006 [2024-07-22 20:47:32.962608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:21.267 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.267 [2024-07-22 20:47:33.083596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:21.267 [2024-07-22 20:47:33.265175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:21.267 [2024-07-22 20:47:33.265222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
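[editor's note] The target side of these TCP runs is isolated in its own network namespace; the nvmf_tcp_init steps above move one NIC port into that namespace and address both ends before any NVMe-oF traffic flows. A condensed sketch of that wiring, reusing the interface names and 10.0.0.0/24 addresses from this log (run as root; the cvl_* names are specific to this testbed).

# Sketch of the namespace wiring performed by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk          # namespace that will host nvmf_tgt
TGT_IF=cvl_0_0              # target-side port (moved into the namespace)
INI_IF=cvl_0_1              # initiator-side port (stays in the root namespace)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator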
00:40:21.267 [2024-07-22 20:47:33.265236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:21.267 [2024-07-22 20:47:33.265246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:21.267 [2024-07-22 20:47:33.265257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:21.267 [2024-07-22 20:47:33.265491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:21.267 [2024-07-22 20:47:33.265578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:21.267 [2024-07-22 20:47:33.265686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.267 [2024-07-22 20:47:33.265713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:40:21.837 20:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.837 INFO: Log level set to 20 00:40:21.837 INFO: Requests: 00:40:21.837 { 00:40:21.837 "jsonrpc": "2.0", 00:40:21.837 "method": "nvmf_set_config", 00:40:21.837 "id": 1, 00:40:21.837 "params": { 00:40:21.837 "admin_cmd_passthru": { 00:40:21.837 "identify_ctrlr": true 00:40:21.837 } 00:40:21.837 } 00:40:21.837 } 00:40:21.837 00:40:21.837 INFO: response: 00:40:21.837 { 00:40:21.837 "jsonrpc": "2.0", 00:40:21.837 "id": 1, 00:40:21.837 "result": true 00:40:21.837 } 00:40:21.837 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.837 20:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.837 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.837 INFO: Setting log level to 20 00:40:21.837 INFO: Setting log level to 20 00:40:21.837 INFO: Log level set to 20 00:40:21.837 INFO: Log level set to 20 00:40:21.837 INFO: Requests: 00:40:21.837 { 00:40:21.837 "jsonrpc": "2.0", 00:40:21.837 "method": "framework_start_init", 00:40:21.837 "id": 1 00:40:21.837 } 00:40:21.837 00:40:21.837 INFO: Requests: 00:40:21.837 { 00:40:21.837 "jsonrpc": "2.0", 00:40:21.837 "method": "framework_start_init", 00:40:21.837 "id": 1 00:40:21.837 } 00:40:21.837 00:40:22.099 [2024-07-22 20:47:33.952776] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:22.099 INFO: response: 00:40:22.099 { 00:40:22.099 "jsonrpc": "2.0", 00:40:22.099 "id": 1, 00:40:22.099 "result": true 00:40:22.099 } 00:40:22.099 00:40:22.099 INFO: response: 00:40:22.099 { 00:40:22.099 "jsonrpc": "2.0", 00:40:22.099 "id": 1, 00:40:22.099 "result": true 00:40:22.099 } 00:40:22.099 00:40:22.099 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.099 20:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:22.099 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.099 20:47:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:22.099 INFO: Setting log level to 40 00:40:22.099 INFO: Setting log level to 40 00:40:22.099 INFO: Setting log level to 40 00:40:22.099 [2024-07-22 20:47:33.968168] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.099 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.099 20:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:22.099 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:22.099 20:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.099 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:22.099 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.099 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.360 Nvme0n1 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.360 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.360 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.360 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.360 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.621 [2024-07-22 20:47:34.383268] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:22.621 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.621 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:22.621 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.621 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.621 [ 00:40:22.621 { 00:40:22.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:22.621 "subtype": "Discovery", 00:40:22.621 "listen_addresses": [], 00:40:22.621 "allow_any_host": true, 00:40:22.621 "hosts": [] 00:40:22.621 }, 00:40:22.621 { 00:40:22.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:22.621 "subtype": "NVMe", 00:40:22.621 "listen_addresses": [ 00:40:22.621 { 00:40:22.621 "trtype": "TCP", 00:40:22.621 "adrfam": "IPv4", 00:40:22.621 "traddr": "10.0.0.2", 00:40:22.621 "trsvcid": "4420" 00:40:22.621 } 00:40:22.621 ], 00:40:22.621 "allow_any_host": true, 00:40:22.621 "hosts": [], 00:40:22.621 "serial_number": 
"SPDK00000000000001", 00:40:22.621 "model_number": "SPDK bdev Controller", 00:40:22.621 "max_namespaces": 1, 00:40:22.621 "min_cntlid": 1, 00:40:22.621 "max_cntlid": 65519, 00:40:22.621 "namespaces": [ 00:40:22.621 { 00:40:22.621 "nsid": 1, 00:40:22.621 "bdev_name": "Nvme0n1", 00:40:22.621 "name": "Nvme0n1", 00:40:22.621 "nguid": "36344730526054870025384500000044", 00:40:22.621 "uuid": "36344730-5260-5487-0025-384500000044" 00:40:22.621 } 00:40:22.621 ] 00:40:22.621 } 00:40:22.621 ] 00:40:22.621 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.621 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:22.621 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:22.621 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:22.621 EAL: No free 2048 kB hugepages reported on node 1 00:40:22.882 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:40:22.882 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:22.882 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:22.882 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:22.882 EAL: No free 2048 kB hugepages reported on node 1 00:40:23.143 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:23.143 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:40:23.143 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:23.143 20:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:23.143 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:23.143 20:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:23.143 20:47:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:23.143 20:47:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:23.143 rmmod nvme_tcp 00:40:23.143 rmmod nvme_fabrics 00:40:23.143 rmmod nvme_keyring 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:40:23.143 20:47:35 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3912317 ']' 00:40:23.143 20:47:35 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3912317 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3912317 ']' 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3912317 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912317 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912317' 00:40:23.143 killing process with pid 3912317 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3912317 00:40:23.143 20:47:35 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3912317 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:24.529 20:47:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.529 20:47:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:24.529 20:47:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.444 20:47:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:26.444 00:40:26.444 real 0m13.635s 00:40:26.444 user 0m12.877s 00:40:26.444 sys 0m6.004s 00:40:26.444 20:47:38 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:26.444 20:47:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.444 ************************************ 00:40:26.444 END TEST nvmf_identify_passthru 00:40:26.444 ************************************ 00:40:26.444 20:47:38 -- common/autotest_common.sh@1142 -- # return 0 00:40:26.444 20:47:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:26.444 20:47:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:26.444 20:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:26.444 20:47:38 -- common/autotest_common.sh@10 -- # set +x 00:40:26.444 ************************************ 00:40:26.444 START TEST nvmf_dif 00:40:26.444 ************************************ 00:40:26.444 20:47:38 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:26.444 * Looking for test storage... 
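[editor's note] The nvmf_identify_passthru test that just finished reduces to a short RPC sequence: enable identify passthrough before framework init, attach the local PCIe controller, export it over TCP, then check that the serial number seen through the fabric matches the one read directly over PCIe. A sketch of that sequence with SPDK's scripts/rpc.py, assuming a target started with --wait-for-rpc, the default /var/tmp/spdk.sock RPC socket, and paths relative to an SPDK build tree; BDF, NQN and addresses are the ones from this run.

# Sketch of the RPC sequence exercised by nvmf_identify_passthru above.
RPC=./scripts/rpc.py                     # assumes the default /var/tmp/spdk.sock socket
BDF=0000:65:00.0                         # local NVMe controller from this run
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_set_config --passthru-identify-ctrlr   # must happen before framework init
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$BDF"
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns "$NQN" Nvme0n1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# The check itself: with identify passthrough enabled, the serial number
# reported over the fabric should be the drive's own, not SPDK00000000000001.
pcie_sn=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$BDF" \
    | awk '/Serial Number:/ {print $3}')
tcp_sn=$(build/bin/spdk_nvme_identify \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" \
    | awk '/Serial Number:/ {print $3}')
[ "$pcie_sn" = "$tcp_sn" ] || echo "identify passthru mismatch: $pcie_sn vs $tcp_sn"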
00:40:26.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:26.444 20:47:38 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.444 20:47:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.445 20:47:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.445 20:47:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.445 20:47:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.445 20:47:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.445 20:47:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.445 20:47:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.445 20:47:38 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:40:26.445 20:47:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:26.445 20:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:26.445 20:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:26.445 20:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:26.445 20:47:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:26.445 20:47:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.445 20:47:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:26.445 20:47:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:26.445 20:47:38 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:40:26.445 20:47:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.033 20:47:44 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:33.034 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:33.034 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:33.034 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:33.034 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:33.034 20:47:44 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.295 20:47:45 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:33.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:33.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:40:33.295 00:40:33.295 --- 10.0.0.2 ping statistics --- 00:40:33.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.295 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:33.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:40:33.295 00:40:33.295 --- 10.0.0.1 ping statistics --- 00:40:33.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.295 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:33.295 20:47:45 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:36.693 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:36.693 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:36.693 20:47:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:36.693 20:47:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3918504 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3918504 00:40:36.693 20:47:48 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3918504 ']' 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:36.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:36.693 20:47:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:36.954 [2024-07-22 20:47:48.789227] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:36.954 [2024-07-22 20:47:48.789351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:36.954 EAL: No free 2048 kB hugepages reported on node 1 00:40:36.954 [2024-07-22 20:47:48.920869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.215 [2024-07-22 20:47:49.100941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.215 [2024-07-22 20:47:49.100986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.215 [2024-07-22 20:47:49.101000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.215 [2024-07-22 20:47:49.101009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.215 [2024-07-22 20:47:49.101019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
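[editor's note] For the dif tests the harness starts nvmf_tgt inside the target namespace and only proceeds once its RPC socket answers (waitforlisten), after which it creates a TCP transport with DIF insert/strip enabled. A rough sketch of that startup, assuming the default /var/tmp/spdk.sock socket and paths relative to an SPDK build tree; the real waitforlisten helper is more elaborate than this poll loop.

# Sketch of nvmfappstart as used by dif.sh above.
NS=cvl_0_0_ns_spdk
APP=build/bin/nvmf_tgt
SOCK=/var/tmp/spdk.sock                  # assumed default RPC socket

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF &
pid=$!

# Poll the RPC socket rather than sleeping a fixed time (the idea behind waitforlisten).
until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

# With the app up, dif.sh creates the DIF-capable transport used by the fio tests below.
./scripts/rpc.py -s "$SOCK" nvmf_create_transport -t tcp -o --dif-insert-or-strip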
00:40:37.215 [2024-07-22 20:47:49.101053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:40:37.787 20:47:49 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 20:47:49 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:37.787 20:47:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:37.787 20:47:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 [2024-07-22 20:47:49.560751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.787 20:47:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 ************************************ 00:40:37.787 START TEST fio_dif_1_default 00:40:37.787 ************************************ 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 bdev_null0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:37.787 [2024-07-22 20:47:49.641132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:37.787 { 00:40:37.787 "params": { 00:40:37.787 "name": "Nvme$subsystem", 00:40:37.787 "trtype": "$TEST_TRANSPORT", 00:40:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:37.787 "adrfam": "ipv4", 00:40:37.787 "trsvcid": "$NVMF_PORT", 00:40:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:37.787 "hdgst": ${hdgst:-false}, 00:40:37.787 "ddgst": ${ddgst:-false} 00:40:37.787 }, 00:40:37.787 "method": "bdev_nvme_attach_controller" 00:40:37.787 } 00:40:37.787 EOF 00:40:37.787 )") 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:37.787 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:37.788 "params": { 00:40:37.788 "name": "Nvme0", 00:40:37.788 "trtype": "tcp", 00:40:37.788 "traddr": "10.0.0.2", 00:40:37.788 "adrfam": "ipv4", 00:40:37.788 "trsvcid": "4420", 00:40:37.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:37.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:37.788 "hdgst": false, 00:40:37.788 "ddgst": false 00:40:37.788 }, 00:40:37.788 "method": "bdev_nvme_attach_controller" 00:40:37.788 }' 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:37.788 20:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.356 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:38.356 fio-3.35 00:40:38.356 Starting 1 thread 00:40:38.356 EAL: No free 2048 kB hugepages reported on node 1 00:40:50.591 00:40:50.591 filename0: (groupid=0, jobs=1): err= 0: pid=3919026: Mon Jul 22 20:48:00 2024 00:40:50.591 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10003msec) 00:40:50.591 slat (nsec): min=5948, max=52413, avg=8007.24, stdev=2752.50 00:40:50.591 clat (usec): min=41835, max=43007, avg=42005.49, stdev=168.91 00:40:50.591 lat (usec): min=41841, max=43017, avg=42013.50, stdev=169.16 00:40:50.591 clat percentiles (usec): 00:40:50.591 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:40:50.591 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:50.591 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:50.591 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:40:50.591 | 99.99th=[43254] 00:40:50.591 bw ( KiB/s): min= 352, max= 384, per=99.82%, avg=380.63, stdev=10.09, samples=19 00:40:50.591 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:40:50.591 lat (msec) : 50=100.00% 00:40:50.591 cpu : usr=95.80%, sys=3.95%, ctx=14, majf=0, minf=1635 00:40:50.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.591 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.591 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:40:50.591 00:40:50.591 Run status group 0 (all jobs): 00:40:50.591 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10003-10003msec 00:40:50.591 ----------------------------------------------------- 00:40:50.591 Suppressions used: 00:40:50.591 count bytes template 00:40:50.591 1 8 /usr/src/fio/parse.c 00:40:50.591 1 8 libtcmalloc_minimal.so 00:40:50.591 1 904 libcrypto.so 00:40:50.591 ----------------------------------------------------- 00:40:50.591 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.591 00:40:50.591 real 0m12.051s 00:40:50.591 user 0m26.917s 00:40:50.591 sys 0m0.982s 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 ************************************ 00:40:50.591 END TEST fio_dif_1_default 00:40:50.591 ************************************ 00:40:50.591 20:48:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:50.591 20:48:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:50.591 20:48:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:50.591 20:48:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 ************************************ 00:40:50.591 START TEST fio_dif_1_multi_subsystems 00:40:50.591 ************************************ 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:50.591 
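The fio_dif_1_default pass that just completed is driven by a handful of RPCs: one transport create (done once, further up: nvmf_create_transport -t tcp -o --dif-insert-or-strip), four calls to stand the subsystem up, and two to tear it down again. A hedged sketch of the same sequence issued through SPDK's scripts/rpc.py client (the test goes through its rpc_cmd wrapper instead, so the client path is an assumption; the arguments are copied from the log):

  # null bdev: size 64, 512-byte blocks with 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # ... fio runs against the exported namespace here ...

  # teardown, mirroring the destroy_subsystems 0 step just above
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0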
20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 bdev_null0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:50.591 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.592 [2024-07-22 20:48:01.773585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.592 bdev_null1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:50.592 { 00:40:50.592 "params": { 00:40:50.592 "name": "Nvme$subsystem", 00:40:50.592 "trtype": "$TEST_TRANSPORT", 00:40:50.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:50.592 "adrfam": "ipv4", 00:40:50.592 "trsvcid": "$NVMF_PORT", 00:40:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:50.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:50.592 "hdgst": ${hdgst:-false}, 00:40:50.592 "ddgst": ${ddgst:-false} 00:40:50.592 }, 00:40:50.592 "method": "bdev_nvme_attach_controller" 00:40:50.592 } 00:40:50.592 EOF 00:40:50.592 )") 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 
00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:50.592 { 00:40:50.592 "params": { 00:40:50.592 "name": "Nvme$subsystem", 00:40:50.592 "trtype": "$TEST_TRANSPORT", 00:40:50.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:50.592 "adrfam": "ipv4", 00:40:50.592 "trsvcid": "$NVMF_PORT", 00:40:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:50.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:50.592 "hdgst": ${hdgst:-false}, 00:40:50.592 "ddgst": ${ddgst:-false} 00:40:50.592 }, 00:40:50.592 "method": "bdev_nvme_attach_controller" 00:40:50.592 } 00:40:50.592 EOF 00:40:50.592 )") 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:50.592 "params": { 00:40:50.592 "name": "Nvme0", 00:40:50.592 "trtype": "tcp", 00:40:50.592 "traddr": "10.0.0.2", 00:40:50.592 "adrfam": "ipv4", 00:40:50.592 "trsvcid": "4420", 00:40:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:50.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:50.592 "hdgst": false, 00:40:50.592 "ddgst": false 00:40:50.592 }, 00:40:50.592 "method": "bdev_nvme_attach_controller" 00:40:50.592 },{ 00:40:50.592 "params": { 00:40:50.592 "name": "Nvme1", 00:40:50.592 "trtype": "tcp", 00:40:50.592 "traddr": "10.0.0.2", 00:40:50.592 "adrfam": "ipv4", 00:40:50.592 "trsvcid": "4420", 00:40:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:50.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:50.592 "hdgst": false, 00:40:50.592 "ddgst": false 00:40:50.592 }, 00:40:50.592 "method": "bdev_nvme_attach_controller" 00:40:50.592 }' 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:50.592 20:48:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.592 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:50.592 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:50.592 fio-3.35 00:40:50.592 Starting 2 threads 00:40:50.592 EAL: No free 2048 kB hugepages reported on node 1 00:41:02.821 00:41:02.821 filename0: (groupid=0, jobs=1): err= 0: pid=3921644: Mon Jul 22 20:48:13 2024 00:41:02.821 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10004msec) 00:41:02.821 slat (nsec): min=5932, max=51180, avg=8418.90, stdev=3431.17 00:41:02.821 clat (usec): min=40829, max=45008, avg=41833.15, stdev=428.93 00:41:02.821 lat (usec): min=40838, max=45059, avg=41841.57, stdev=428.71 00:41:02.821 clat percentiles (usec): 00:41:02.821 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:41:02.821 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:41:02.821 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:02.822 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:41:02.822 | 99.99th=[44827] 00:41:02.822 bw ( KiB/s): min= 352, max= 416, per=50.14%, avg=382.32, stdev=12.95, samples=19 00:41:02.822 iops : min= 88, max= 104, avg=95.58, stdev= 3.24, samples=19 00:41:02.822 lat (msec) : 50=100.00% 00:41:02.822 cpu : usr=97.12%, sys=2.64%, ctx=16, majf=0, minf=1635 00:41:02.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.822 issued rwts: total=956,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:41:02.822 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:02.822 filename1: (groupid=0, jobs=1): err= 0: pid=3921645: Mon Jul 22 20:48:13 2024 00:41:02.822 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10039msec) 00:41:02.822 slat (nsec): min=5944, max=51615, avg=8247.60, stdev=3325.18 00:41:02.822 clat (usec): min=41140, max=44531, avg=41978.10, stdev=189.35 00:41:02.822 lat (usec): min=41146, max=44582, avg=41986.35, stdev=190.21 00:41:02.822 clat percentiles (usec): 00:41:02.822 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:41:02.822 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:41:02.822 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:02.822 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:41:02.822 | 99.99th=[44303] 00:41:02.822 bw ( KiB/s): min= 352, max= 384, per=49.88%, avg=380.80, stdev= 9.85, samples=20 00:41:02.822 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:41:02.822 lat (msec) : 50=100.00% 00:41:02.822 cpu : usr=97.16%, sys=2.59%, ctx=13, majf=0, minf=1635 00:41:02.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.822 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.822 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:02.822 00:41:02.822 Run status group 0 (all jobs): 00:41:02.822 READ: bw=762KiB/s (780kB/s), 381KiB/s-382KiB/s (390kB/s-391kB/s), io=7648KiB (7832kB), run=10004-10039msec 00:41:02.822 ----------------------------------------------------- 00:41:02.822 Suppressions used: 00:41:02.822 count bytes template 00:41:02.822 2 16 /usr/src/fio/parse.c 00:41:02.822 1 8 libtcmalloc_minimal.so 00:41:02.822 1 904 libcrypto.so 00:41:02.822 ----------------------------------------------------- 00:41:02.822 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
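As a quick way to read the two-thread group summary above: each job issued 956 reads of 4 KiB, i.e. 3824 KiB per job, which over 10.004 s and 10.039 s works out to roughly 382 KiB/s and 381 KiB/s respectively; the "Run status group 0" line then reports the combined 7648 KiB over the ~10.04 s window, which is the aggregate 762 KiB/s shown.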
00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 00:41:02.822 real 0m12.457s 00:41:02.822 user 0m35.322s 00:41:02.822 sys 0m1.062s 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 ************************************ 00:41:02.822 END TEST fio_dif_1_multi_subsystems 00:41:02.822 ************************************ 00:41:02.822 20:48:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:02.822 20:48:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:02.822 20:48:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:02.822 20:48:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 ************************************ 00:41:02.822 START TEST fio_dif_rand_params 00:41:02.822 ************************************ 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 bdev_null0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.822 [2024-07-22 20:48:14.306076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:02.822 { 00:41:02.822 "params": { 00:41:02.822 "name": "Nvme$subsystem", 00:41:02.822 "trtype": "$TEST_TRANSPORT", 00:41:02.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.822 "adrfam": "ipv4", 00:41:02.822 "trsvcid": "$NVMF_PORT", 00:41:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.822 "hdgst": ${hdgst:-false}, 00:41:02.822 "ddgst": ${ddgst:-false} 00:41:02.822 }, 00:41:02.822 "method": 
"bdev_nvme_attach_controller" 00:41:02.822 } 00:41:02.822 EOF 00:41:02.822 )") 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:02.822 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:02.823 "params": { 00:41:02.823 "name": "Nvme0", 00:41:02.823 "trtype": "tcp", 00:41:02.823 "traddr": "10.0.0.2", 00:41:02.823 "adrfam": "ipv4", 00:41:02.823 "trsvcid": "4420", 00:41:02.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.823 "hdgst": false, 00:41:02.823 "ddgst": false 00:41:02.823 }, 00:41:02.823 "method": "bdev_nvme_attach_controller" 00:41:02.823 }' 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:02.823 20:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.823 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:02.823 ... 
00:41:02.823 fio-3.35 00:41:02.823 Starting 3 threads 00:41:02.823 EAL: No free 2048 kB hugepages reported on node 1 00:41:09.407 00:41:09.407 filename0: (groupid=0, jobs=1): err= 0: pid=3924587: Mon Jul 22 20:48:20 2024 00:41:09.407 read: IOPS=168, BW=21.0MiB/s (22.0MB/s)(105MiB/5009msec) 00:41:09.407 slat (nsec): min=6125, max=42747, avg=11044.69, stdev=2171.69 00:41:09.407 clat (usec): min=6853, max=95332, avg=17826.87, stdev=14596.22 00:41:09.407 lat (usec): min=6861, max=95345, avg=17837.91, stdev=14596.27 00:41:09.407 clat percentiles (usec): 00:41:09.407 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10290], 00:41:09.407 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13042], 60.00th=[13829], 00:41:09.407 | 70.00th=[14746], 80.00th=[16188], 90.00th=[50594], 95.00th=[53216], 00:41:09.407 | 99.00th=[57410], 99.50th=[89654], 99.90th=[94897], 99.95th=[94897], 00:41:09.407 | 99.99th=[94897] 00:41:09.407 bw ( KiB/s): min=15616, max=27904, per=29.93%, avg=21478.40, stdev=4433.15, samples=10 00:41:09.407 iops : min= 122, max= 218, avg=167.80, stdev=34.63, samples=10 00:41:09.407 lat (msec) : 10=15.68%, 20=71.38%, 50=1.78%, 100=11.16% 00:41:09.407 cpu : usr=95.61%, sys=4.09%, ctx=11, majf=0, minf=1635 00:41:09.407 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:09.407 filename0: (groupid=0, jobs=1): err= 0: pid=3924588: Mon Jul 22 20:48:20 2024 00:41:09.407 read: IOPS=175, BW=22.0MiB/s (23.0MB/s)(110MiB/5006msec) 00:41:09.407 slat (nsec): min=6211, max=44529, avg=11037.02, stdev=1856.55 00:41:09.407 clat (usec): min=6234, max=96560, avg=17046.93, stdev=14153.49 00:41:09.407 lat (usec): min=6246, max=96569, avg=17057.97, stdev=14153.36 00:41:09.407 clat percentiles (usec): 00:41:09.407 | 1.00th=[ 7177], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[10028], 00:41:09.407 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12911], 60.00th=[13960], 00:41:09.407 | 70.00th=[15008], 80.00th=[16188], 90.00th=[49021], 95.00th=[53216], 00:41:09.407 | 99.00th=[57410], 99.50th=[92799], 99.90th=[96994], 99.95th=[96994], 00:41:09.407 | 99.99th=[96994] 00:41:09.407 bw ( KiB/s): min=13312, max=30464, per=32.27%, avg=23153.78, stdev=6555.69, samples=9 00:41:09.407 iops : min= 104, max= 238, avg=180.89, stdev=51.22, samples=9 00:41:09.407 lat (msec) : 10=19.77%, 20=69.77%, 50=0.80%, 100=9.66% 00:41:09.407 cpu : usr=96.30%, sys=3.44%, ctx=6, majf=0, minf=1637 00:41:09.407 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:09.407 filename0: (groupid=0, jobs=1): err= 0: pid=3924589: Mon Jul 22 20:48:20 2024 00:41:09.407 read: IOPS=217, BW=27.1MiB/s (28.4MB/s)(136MiB/5004msec) 00:41:09.407 slat (nsec): min=6066, max=47241, avg=13542.83, stdev=3283.62 00:41:09.407 clat (usec): min=4943, max=91420, avg=13804.92, stdev=12837.90 00:41:09.407 lat (usec): min=4958, max=91430, avg=13818.46, stdev=12837.78 00:41:09.407 clat percentiles (usec): 
00:41:09.407 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7767], 00:41:09.407 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:41:09.407 | 70.00th=[11207], 80.00th=[12518], 90.00th=[47449], 95.00th=[49546], 00:41:09.407 | 99.00th=[52691], 99.50th=[55313], 99.90th=[90702], 99.95th=[91751], 00:41:09.407 | 99.99th=[91751] 00:41:09.407 bw ( KiB/s): min=24832, max=34304, per=39.36%, avg=28245.33, stdev=3093.26, samples=9 00:41:09.407 iops : min= 194, max= 268, avg=220.67, stdev=24.17, samples=9 00:41:09.407 lat (msec) : 10=52.76%, 20=36.92%, 50=5.71%, 100=4.60% 00:41:09.407 cpu : usr=96.14%, sys=3.54%, ctx=6, majf=0, minf=1638 00:41:09.407 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.407 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:09.407 00:41:09.407 Run status group 0 (all jobs): 00:41:09.407 READ: bw=70.1MiB/s (73.5MB/s), 21.0MiB/s-27.1MiB/s (22.0MB/s-28.4MB/s), io=351MiB (368MB), run=5004-5009msec 00:41:09.407 ----------------------------------------------------- 00:41:09.407 Suppressions used: 00:41:09.407 count bytes template 00:41:09.407 5 44 /usr/src/fio/parse.c 00:41:09.407 1 8 libtcmalloc_minimal.so 00:41:09.407 1 904 libcrypto.so 00:41:09.407 ----------------------------------------------------- 00:41:09.407 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@28 -- # local sub 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 bdev_null0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.407 [2024-07-22 20:48:21.301085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:09.407 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 bdev_null1 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 
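The setup that the surrounding lines are stepping through (and which continues just below for the remaining namespaces and listeners) is the same four-RPC pattern as before, repeated for sub-ids 0, 1 and 2 and switched to --dif-type 2 null bdevs. Condensed into a bash loop as a sketch, with rpc.py again standing in for the test's rpc_cmd wrapper:

  for i in 0 1 2; do
      scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done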
20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 bdev_null2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:09.408 { 00:41:09.408 "params": { 00:41:09.408 "name": "Nvme$subsystem", 00:41:09.408 "trtype": "$TEST_TRANSPORT", 00:41:09.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.408 "adrfam": "ipv4", 00:41:09.408 "trsvcid": "$NVMF_PORT", 00:41:09.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.408 "hdgst": ${hdgst:-false}, 00:41:09.408 "ddgst": ${ddgst:-false} 00:41:09.408 }, 00:41:09.408 "method": "bdev_nvme_attach_controller" 00:41:09.408 } 00:41:09.408 EOF 00:41:09.408 )") 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:09.408 { 00:41:09.408 "params": { 00:41:09.408 "name": "Nvme$subsystem", 00:41:09.408 "trtype": "$TEST_TRANSPORT", 00:41:09.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.408 "adrfam": "ipv4", 00:41:09.408 "trsvcid": "$NVMF_PORT", 00:41:09.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.408 "hdgst": 
${hdgst:-false}, 00:41:09.408 "ddgst": ${ddgst:-false} 00:41:09.408 }, 00:41:09.408 "method": "bdev_nvme_attach_controller" 00:41:09.408 } 00:41:09.408 EOF 00:41:09.408 )") 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:09.408 { 00:41:09.408 "params": { 00:41:09.408 "name": "Nvme$subsystem", 00:41:09.408 "trtype": "$TEST_TRANSPORT", 00:41:09.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.408 "adrfam": "ipv4", 00:41:09.408 "trsvcid": "$NVMF_PORT", 00:41:09.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.408 "hdgst": ${hdgst:-false}, 00:41:09.408 "ddgst": ${ddgst:-false} 00:41:09.408 }, 00:41:09.408 "method": "bdev_nvme_attach_controller" 00:41:09.408 } 00:41:09.408 EOF 00:41:09.408 )") 00:41:09.408 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:09.669 "params": { 00:41:09.669 "name": "Nvme0", 00:41:09.669 "trtype": "tcp", 00:41:09.669 "traddr": "10.0.0.2", 00:41:09.669 "adrfam": "ipv4", 00:41:09.669 "trsvcid": "4420", 00:41:09.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:09.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:09.669 "hdgst": false, 00:41:09.669 "ddgst": false 00:41:09.669 }, 00:41:09.669 "method": "bdev_nvme_attach_controller" 00:41:09.669 },{ 00:41:09.669 "params": { 00:41:09.669 "name": "Nvme1", 00:41:09.669 "trtype": "tcp", 00:41:09.669 "traddr": "10.0.0.2", 00:41:09.669 "adrfam": "ipv4", 00:41:09.669 "trsvcid": "4420", 00:41:09.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:09.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:09.669 "hdgst": false, 00:41:09.669 "ddgst": false 00:41:09.669 }, 00:41:09.669 "method": "bdev_nvme_attach_controller" 00:41:09.669 },{ 00:41:09.669 "params": { 00:41:09.669 "name": "Nvme2", 00:41:09.669 "trtype": "tcp", 00:41:09.669 "traddr": "10.0.0.2", 00:41:09.669 "adrfam": "ipv4", 00:41:09.669 "trsvcid": "4420", 00:41:09.669 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:09.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:09.669 "hdgst": false, 00:41:09.669 "ddgst": false 00:41:09.669 }, 00:41:09.669 "method": "bdev_nvme_attach_controller" 00:41:09.669 }' 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:09.669 20:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:09.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:09.931 ... 00:41:09.931 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:09.931 ... 00:41:09.931 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:09.931 ... 00:41:09.931 fio-3.35 00:41:09.931 Starting 24 threads 00:41:09.931 EAL: No free 2048 kB hugepages reported on node 1 00:41:22.174 00:41:22.174 filename0: (groupid=0, jobs=1): err= 0: pid=3926118: Mon Jul 22 20:48:33 2024 00:41:22.174 read: IOPS=457, BW=1831KiB/s (1875kB/s)(17.9MiB/10031msec) 00:41:22.174 slat (nsec): min=6265, max=46489, avg=8941.09, stdev=3065.96 00:41:22.174 clat (usec): min=1809, max=55336, avg=34869.31, stdev=5204.45 00:41:22.174 lat (usec): min=1821, max=55343, avg=34878.25, stdev=5204.02 00:41:22.174 clat percentiles (usec): 00:41:22.174 | 1.00th=[ 7242], 5.00th=[23987], 10.00th=[28443], 20.00th=[35914], 00:41:22.174 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.174 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:41:22.174 | 99.00th=[38011], 99.50th=[38536], 99.90th=[43254], 99.95th=[43779], 00:41:22.174 | 99.99th=[55313] 00:41:22.174 bw ( KiB/s): min= 1664, max= 2304, per=4.35%, avg=1830.40, stdev=150.31, samples=20 00:41:22.174 iops : min= 416, max= 576, avg=457.60, stdev=37.58, samples=20 00:41:22.174 lat (msec) : 2=0.35%, 4=0.35%, 10=0.41%, 20=2.22%, 50=96.62% 00:41:22.174 lat (msec) : 100=0.04% 00:41:22.174 cpu : usr=99.00%, sys=0.68%, ctx=54, majf=0, minf=1633 00:41:22.175 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926119: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=437, BW=1751KiB/s (1793kB/s)(17.1MiB/10016msec) 00:41:22.175 slat (nsec): min=6494, max=76506, avg=19002.03, stdev=11370.00 00:41:22.175 clat (usec): min=17155, max=47691, avg=36413.57, stdev=1975.15 00:41:22.175 lat (usec): min=17165, max=47737, avg=36432.57, stdev=1975.12 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[27132], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.175 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.175 | 99.00th=[44303], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449], 00:41:22.175 | 99.99th=[47449] 00:41:22.175 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=61.85, samples=19 00:41:22.175 iops : min= 416, max= 448, avg=436.21, stdev=15.46, samples=19 00:41:22.175 lat (msec) : 20=0.36%, 50=99.64% 00:41:22.175 cpu : usr=99.14%, sys=0.56%, ctx=13, majf=0, minf=1634 00:41:22.175 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 
00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926120: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=441, BW=1767KiB/s (1809kB/s)(17.3MiB/10008msec) 00:41:22.175 slat (nsec): min=6346, max=77481, avg=25711.01, stdev=12995.29 00:41:22.175 clat (usec): min=8089, max=84592, avg=36030.12, stdev=3878.17 00:41:22.175 lat (usec): min=8100, max=84616, avg=36055.83, stdev=3879.37 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[23200], 5.00th=[30278], 10.00th=[35390], 20.00th=[35914], 00:41:22.175 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.175 | 99.00th=[44303], 99.50th=[55313], 99.90th=[66847], 99.95th=[66847], 00:41:22.175 | 99.99th=[84411] 00:41:22.175 bw ( KiB/s): min= 1539, max= 1984, per=4.17%, avg=1754.26, stdev=87.58, samples=19 00:41:22.175 iops : min= 384, max= 496, avg=438.53, stdev=22.00, samples=19 00:41:22.175 lat (msec) : 10=0.32%, 20=0.36%, 50=98.73%, 100=0.59% 00:41:22.175 cpu : usr=98.64%, sys=0.92%, ctx=111, majf=0, minf=1636 00:41:22.175 IO depths : 1=0.5%, 2=6.6%, 4=24.3%, 8=56.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926121: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10005msec) 00:41:22.175 slat (nsec): min=6760, max=77803, avg=25735.75, stdev=11653.88 00:41:22.175 clat (usec): min=17101, max=69900, avg=36426.86, stdev=2573.64 00:41:22.175 lat (usec): min=17108, max=69933, avg=36452.59, stdev=2572.97 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[34866], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.175 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:41:22.175 | 99.00th=[38536], 99.50th=[44303], 99.90th=[69731], 99.95th=[69731], 00:41:22.175 | 99.99th=[69731] 00:41:22.175 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1738.11, stdev=77.69, samples=19 00:41:22.175 iops : min= 384, max= 448, avg=434.53, stdev=19.42, samples=19 00:41:22.175 lat (msec) : 20=0.32%, 50=99.22%, 100=0.46% 00:41:22.175 cpu : usr=97.46%, sys=1.50%, ctx=112, majf=0, minf=1634 00:41:22.175 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926122: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10009msec) 00:41:22.175 slat (usec): min=6, max=149, avg=16.28, stdev=10.52 00:41:22.175 clat 
(usec): min=25942, max=58171, avg=36533.36, stdev=1675.09 00:41:22.175 lat (usec): min=25965, max=58230, avg=36549.64, stdev=1674.59 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[36439], 00:41:22.175 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 00:41:22.175 | 99.00th=[38536], 99.50th=[47449], 99.90th=[57934], 99.95th=[57934], 00:41:22.175 | 99.99th=[57934] 00:41:22.175 bw ( KiB/s): min= 1664, max= 1792, per=4.13%, avg=1738.11, stdev=64.93, samples=19 00:41:22.175 iops : min= 416, max= 448, avg=434.53, stdev=16.23, samples=19 00:41:22.175 lat (msec) : 50=99.63%, 100=0.37% 00:41:22.175 cpu : usr=98.80%, sys=0.83%, ctx=67, majf=0, minf=1637 00:41:22.175 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926123: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=432, BW=1729KiB/s (1771kB/s)(16.9MiB/10006msec) 00:41:22.175 slat (nsec): min=6355, max=72020, avg=20722.33, stdev=12247.60 00:41:22.175 clat (usec): min=7987, max=65925, avg=36845.37, stdev=4484.94 00:41:22.175 lat (usec): min=7997, max=65951, avg=36866.09, stdev=4483.77 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[24249], 5.00th=[34866], 10.00th=[35914], 20.00th=[35914], 00:41:22.175 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[38011], 95.00th=[44303], 00:41:22.175 | 99.00th=[58459], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:41:22.175 | 99.99th=[65799] 00:41:22.175 bw ( KiB/s): min= 1536, max= 1792, per=4.08%, avg=1716.21, stdev=68.84, samples=19 00:41:22.175 iops : min= 384, max= 448, avg=429.05, stdev=17.21, samples=19 00:41:22.175 lat (msec) : 10=0.23%, 20=0.42%, 50=97.02%, 100=2.33% 00:41:22.175 cpu : usr=98.70%, sys=1.00%, ctx=16, majf=0, minf=1636 00:41:22.175 IO depths : 1=2.1%, 2=5.2%, 4=16.2%, 8=64.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=92.6%, 8=3.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926124: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=440, BW=1762KiB/s (1804kB/s)(17.2MiB/10027msec) 00:41:22.175 slat (nsec): min=6446, max=89688, avg=14691.41, stdev=9090.90 00:41:22.175 clat (usec): min=16870, max=42818, avg=36202.00, stdev=2173.80 00:41:22.175 lat (usec): min=16880, max=42834, avg=36216.69, stdev=2173.38 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[18482], 5.00th=[35390], 10.00th=[35914], 20.00th=[36439], 00:41:22.175 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 00:41:22.175 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:41:22.175 | 99.99th=[42730] 00:41:22.175 bw ( KiB/s): min= 1664, max= 
1920, per=4.18%, avg=1760.00, stdev=81.75, samples=20 00:41:22.175 iops : min= 416, max= 480, avg=440.00, stdev=20.44, samples=20 00:41:22.175 lat (msec) : 20=1.09%, 50=98.91% 00:41:22.175 cpu : usr=97.37%, sys=1.61%, ctx=55, majf=0, minf=1635 00:41:22.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.175 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.175 filename0: (groupid=0, jobs=1): err= 0: pid=3926125: Mon Jul 22 20:48:33 2024 00:41:22.175 read: IOPS=442, BW=1771KiB/s (1813kB/s)(17.3MiB/10003msec) 00:41:22.175 slat (nsec): min=6149, max=85226, avg=17975.78, stdev=12145.48 00:41:22.175 clat (usec): min=15929, max=69982, avg=36007.50, stdev=3567.58 00:41:22.175 lat (usec): min=15940, max=70017, avg=36025.47, stdev=3568.55 00:41:22.175 clat percentiles (usec): 00:41:22.175 | 1.00th=[22414], 5.00th=[28705], 10.00th=[35390], 20.00th=[35914], 00:41:22.175 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.175 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.175 | 99.00th=[44827], 99.50th=[53740], 99.90th=[60556], 99.95th=[69731], 00:41:22.175 | 99.99th=[69731] 00:41:22.175 bw ( KiB/s): min= 1648, max= 2064, per=4.21%, avg=1770.11, stdev=100.09, samples=19 00:41:22.175 iops : min= 412, max= 516, avg=442.53, stdev=25.02, samples=19 00:41:22.175 lat (msec) : 20=0.47%, 50=98.89%, 100=0.63% 00:41:22.175 cpu : usr=99.07%, sys=0.64%, ctx=16, majf=0, minf=1635 00:41:22.176 IO depths : 1=2.4%, 2=7.1%, 4=21.6%, 8=58.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=93.7%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926126: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=448, BW=1796KiB/s (1839kB/s)(17.6MiB/10027msec) 00:41:22.176 slat (nsec): min=6225, max=93977, avg=13287.53, stdev=8882.48 00:41:22.176 clat (usec): min=14133, max=62761, avg=35525.12, stdev=5030.02 00:41:22.176 lat (usec): min=14141, max=62778, avg=35538.41, stdev=5030.94 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[19006], 5.00th=[25560], 10.00th=[28705], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[40109], 00:41:22.176 | 99.00th=[52167], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:41:22.176 | 99.99th=[62653] 00:41:22.176 bw ( KiB/s): min= 1664, max= 1992, per=4.26%, avg=1794.40, stdev=86.17, samples=20 00:41:22.176 iops : min= 416, max= 498, avg=448.60, stdev=21.54, samples=20 00:41:22.176 lat (msec) : 20=1.27%, 50=97.25%, 100=1.49% 00:41:22.176 cpu : usr=98.23%, sys=1.07%, ctx=84, majf=0, minf=1639 00:41:22.176 IO depths : 1=3.6%, 2=7.4%, 4=16.9%, 8=62.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=92.1%, 8=3.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926127: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=435, BW=1743KiB/s (1785kB/s)(17.0MiB/10003msec) 00:41:22.176 slat (nsec): min=6284, max=89147, avg=22208.23, stdev=11737.70 00:41:22.176 clat (usec): min=25979, max=82571, avg=36522.72, stdev=2375.42 00:41:22.176 lat (usec): min=25992, max=82598, avg=36544.92, stdev=2374.92 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[34866], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 00:41:22.176 | 99.00th=[38536], 99.50th=[50070], 99.90th=[69731], 99.95th=[69731], 00:41:22.176 | 99.99th=[82314] 00:41:22.176 bw ( KiB/s): min= 1536, max= 1840, per=4.14%, avg=1740.63, stdev=79.03, samples=19 00:41:22.176 iops : min= 384, max= 460, avg=435.16, stdev=19.76, samples=19 00:41:22.176 lat (msec) : 50=99.61%, 100=0.39% 00:41:22.176 cpu : usr=98.87%, sys=0.76%, ctx=59, majf=0, minf=1633 00:41:22.176 IO depths : 1=4.5%, 2=10.6%, 4=24.8%, 8=52.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926128: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=436, BW=1744KiB/s (1786kB/s)(17.1MiB/10017msec) 00:41:22.176 slat (nsec): min=6212, max=77167, avg=17370.90, stdev=10209.40 00:41:22.176 clat (usec): min=24696, max=52963, avg=36502.44, stdev=1705.92 00:41:22.176 lat (usec): min=24704, max=52997, avg=36519.81, stdev=1705.81 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[34341], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.176 | 99.00th=[44303], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:41:22.176 | 99.99th=[53216] 00:41:22.176 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=63.44, samples=19 00:41:22.176 iops : min= 416, max= 448, avg=436.21, stdev=15.86, samples=19 00:41:22.176 lat (msec) : 50=99.40%, 100=0.60% 00:41:22.176 cpu : usr=99.07%, sys=0.64%, ctx=15, majf=0, minf=1635 00:41:22.176 IO depths : 1=5.8%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926129: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10006msec) 00:41:22.176 slat (usec): min=6, max=139, avg=29.10, stdev=15.68 00:41:22.176 clat (usec): min=25311, max=70688, avg=36415.58, stdev=1647.03 00:41:22.176 lat (usec): min=25320, max=70716, avg=36444.69, stdev=1645.17 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[34866], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.176 | 
30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 00:41:22.176 | 99.00th=[38536], 99.50th=[44827], 99.90th=[54264], 99.95th=[54264], 00:41:22.176 | 99.99th=[70779] 00:41:22.176 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=61.85, samples=19 00:41:22.176 iops : min= 416, max= 448, avg=436.21, stdev=15.46, samples=19 00:41:22.176 lat (msec) : 50=99.63%, 100=0.37% 00:41:22.176 cpu : usr=99.07%, sys=0.60%, ctx=47, majf=0, minf=1635 00:41:22.176 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926130: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10009msec) 00:41:22.176 slat (usec): min=6, max=144, avg=18.33, stdev= 9.98 00:41:22.176 clat (usec): min=26510, max=49948, avg=36474.67, stdev=1184.55 00:41:22.176 lat (usec): min=26522, max=49970, avg=36493.00, stdev=1184.08 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[34866], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.176 | 99.00th=[38536], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:41:22.176 | 99.99th=[50070] 00:41:22.176 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=63.44, samples=19 00:41:22.176 iops : min= 416, max= 448, avg=436.21, stdev=15.86, samples=19 00:41:22.176 lat (msec) : 50=100.00% 00:41:22.176 cpu : usr=97.93%, sys=1.14%, ctx=40, majf=0, minf=1636 00:41:22.176 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926131: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=446, BW=1788KiB/s (1831kB/s)(17.5MiB/10028msec) 00:41:22.176 slat (nsec): min=6348, max=81654, avg=17382.00, stdev=12338.99 00:41:22.176 clat (usec): min=10097, max=60870, avg=35679.05, stdev=4577.36 00:41:22.176 lat (usec): min=10113, max=60877, avg=35696.43, stdev=4578.85 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[17171], 5.00th=[26608], 10.00th=[30540], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[39584], 00:41:22.176 | 99.00th=[46400], 99.50th=[56886], 99.90th=[60556], 99.95th=[61080], 00:41:22.176 | 99.99th=[61080] 00:41:22.176 bw ( KiB/s): min= 1648, max= 2016, per=4.25%, avg=1786.40, stdev=86.60, samples=20 00:41:22.176 iops : min= 412, max= 504, avg=446.60, stdev=21.65, samples=20 00:41:22.176 lat (msec) : 20=1.34%, 50=97.95%, 100=0.71% 00:41:22.176 cpu : usr=97.86%, sys=1.29%, ctx=68, majf=0, minf=1640 00:41:22.176 IO depths : 1=2.6%, 2=5.4%, 4=14.7%, 
8=67.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=91.3%, 8=3.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 issued rwts: total=4482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.176 filename1: (groupid=0, jobs=1): err= 0: pid=3926132: Mon Jul 22 20:48:33 2024 00:41:22.176 read: IOPS=437, BW=1751KiB/s (1793kB/s)(17.1MiB/10013msec) 00:41:22.176 slat (nsec): min=6289, max=56189, avg=14862.80, stdev=7761.64 00:41:22.176 clat (usec): min=17822, max=53382, avg=36407.13, stdev=1424.39 00:41:22.176 lat (usec): min=17831, max=53410, avg=36421.99, stdev=1424.32 00:41:22.176 clat percentiles (usec): 00:41:22.176 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:41:22.176 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.176 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[37487], 00:41:22.176 | 99.00th=[39060], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:41:22.176 | 99.99th=[53216] 00:41:22.176 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1751.58, stdev=61.13, samples=19 00:41:22.176 iops : min= 416, max= 448, avg=437.89, stdev=15.28, samples=19 00:41:22.176 lat (msec) : 20=0.05%, 50=99.91%, 100=0.05% 00:41:22.176 cpu : usr=99.12%, sys=0.56%, ctx=56, majf=0, minf=1634 00:41:22.176 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename1: (groupid=0, jobs=1): err= 0: pid=3926133: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10021msec) 00:41:22.177 slat (nsec): min=6585, max=46693, avg=14627.02, stdev=6935.11 00:41:22.177 clat (usec): min=15619, max=56373, avg=36446.09, stdev=1754.36 00:41:22.177 lat (usec): min=15649, max=56401, avg=36460.72, stdev=1754.02 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[36439], 00:41:22.177 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 00:41:22.177 | 99.00th=[39060], 99.50th=[39584], 99.90th=[54264], 99.95th=[56361], 00:41:22.177 | 99.99th=[56361] 00:41:22.177 bw ( KiB/s): min= 1660, max= 1792, per=4.15%, avg=1747.00, stdev=62.92, samples=20 00:41:22.177 iops : min= 415, max= 448, avg=436.75, stdev=15.73, samples=20 00:41:22.177 lat (msec) : 20=0.14%, 50=99.45%, 100=0.41% 00:41:22.177 cpu : usr=98.96%, sys=0.67%, ctx=47, majf=0, minf=1636 00:41:22.177 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926134: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10005msec) 00:41:22.177 slat (nsec): min=6088, max=82449, 
avg=22471.65, stdev=15439.84 00:41:22.177 clat (usec): min=7529, max=65588, avg=37397.69, stdev=4901.71 00:41:22.177 lat (usec): min=7535, max=65611, avg=37420.16, stdev=4899.50 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[24773], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.177 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36963], 90.00th=[43254], 95.00th=[46924], 00:41:22.177 | 99.00th=[56886], 99.50th=[62653], 99.90th=[65799], 99.95th=[65799], 00:41:22.177 | 99.99th=[65799] 00:41:22.177 bw ( KiB/s): min= 1392, max= 1792, per=4.03%, avg=1694.89, stdev=117.90, samples=19 00:41:22.177 iops : min= 348, max= 448, avg=423.68, stdev=29.53, samples=19 00:41:22.177 lat (msec) : 10=0.14%, 20=0.40%, 50=96.05%, 100=3.41% 00:41:22.177 cpu : usr=97.80%, sys=1.25%, ctx=132, majf=0, minf=1636 00:41:22.177 IO depths : 1=4.4%, 2=9.0%, 4=20.9%, 8=56.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926135: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=437, BW=1749KiB/s (1791kB/s)(17.1MiB/10015msec) 00:41:22.177 slat (nsec): min=6296, max=87507, avg=24854.25, stdev=13889.19 00:41:22.177 clat (usec): min=16934, max=55029, avg=36372.17, stdev=2439.69 00:41:22.177 lat (usec): min=16950, max=55036, avg=36397.02, stdev=2439.06 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[26084], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:41:22.177 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[38011], 00:41:22.177 | 99.00th=[46400], 99.50th=[49021], 99.90th=[53740], 99.95th=[54789], 00:41:22.177 | 99.99th=[54789] 00:41:22.177 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1742.32, stdev=56.92, samples=19 00:41:22.177 iops : min= 416, max= 448, avg=435.58, stdev=14.23, samples=19 00:41:22.177 lat (msec) : 20=0.32%, 50=99.22%, 100=0.46% 00:41:22.177 cpu : usr=98.86%, sys=0.81%, ctx=48, majf=0, minf=1636 00:41:22.177 IO depths : 1=3.9%, 2=9.5%, 4=23.7%, 8=54.0%, 16=9.0%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926136: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10006msec) 00:41:22.177 slat (nsec): min=6478, max=73326, avg=25422.53, stdev=11994.54 00:41:22.177 clat (usec): min=6198, max=65328, avg=36276.94, stdev=2849.50 00:41:22.177 lat (usec): min=6205, max=65365, avg=36302.36, stdev=2849.85 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[26346], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.177 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:41:22.177 | 99.00th=[38536], 99.50th=[38536], 99.90th=[65274], 99.95th=[65274], 00:41:22.177 | 
99.99th=[65274] 00:41:22.177 bw ( KiB/s): min= 1539, max= 1792, per=4.13%, avg=1738.26, stdev=77.26, samples=19 00:41:22.177 iops : min= 384, max= 448, avg=434.53, stdev=19.42, samples=19 00:41:22.177 lat (msec) : 10=0.36%, 20=0.36%, 50=98.91%, 100=0.36% 00:41:22.177 cpu : usr=98.70%, sys=0.87%, ctx=58, majf=0, minf=1634 00:41:22.177 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926137: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10007msec) 00:41:22.177 slat (nsec): min=6565, max=42409, avg=13383.89, stdev=5851.36 00:41:22.177 clat (usec): min=17396, max=60515, avg=36529.83, stdev=1918.28 00:41:22.177 lat (usec): min=17407, max=60535, avg=36543.22, stdev=1918.25 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[36439], 00:41:22.177 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[37487], 00:41:22.177 | 99.00th=[39060], 99.50th=[57934], 99.90th=[59507], 99.95th=[60031], 00:41:22.177 | 99.99th=[60556] 00:41:22.177 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=61.85, samples=19 00:41:22.177 iops : min= 416, max= 448, avg=436.21, stdev=15.46, samples=19 00:41:22.177 lat (msec) : 20=0.14%, 50=99.36%, 100=0.50% 00:41:22.177 cpu : usr=97.82%, sys=1.25%, ctx=141, majf=0, minf=1633 00:41:22.177 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926138: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10009msec) 00:41:22.177 slat (nsec): min=6175, max=70618, avg=16590.90, stdev=8232.79 00:41:22.177 clat (usec): min=25595, max=50073, avg=36506.52, stdev=1802.69 00:41:22.177 lat (usec): min=25604, max=50126, avg=36523.11, stdev=1802.99 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[29230], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:41:22.177 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:41:22.177 | 99.00th=[45351], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:41:22.177 | 99.99th=[50070] 00:41:22.177 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1744.84, stdev=60.22, samples=19 00:41:22.177 iops : min= 416, max= 448, avg=436.21, stdev=15.05, samples=19 00:41:22.177 lat (msec) : 50=99.89%, 100=0.11% 00:41:22.177 cpu : usr=99.19%, sys=0.49%, ctx=28, majf=0, minf=1634 00:41:22.177 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:22.177 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.177 filename2: (groupid=0, jobs=1): err= 0: pid=3926139: Mon Jul 22 20:48:33 2024 00:41:22.177 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10008msec) 00:41:22.177 slat (nsec): min=6520, max=79109, avg=26082.98, stdev=11109.70 00:41:22.177 clat (usec): min=7916, max=67802, avg=36394.85, stdev=2404.54 00:41:22.177 lat (usec): min=7931, max=67828, avg=36420.94, stdev=2403.88 00:41:22.177 clat percentiles (usec): 00:41:22.177 | 1.00th=[34866], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.177 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.177 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:41:22.177 | 99.00th=[38536], 99.50th=[38536], 99.90th=[67634], 99.95th=[67634], 00:41:22.177 | 99.99th=[67634] 00:41:22.177 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1738.11, stdev=77.69, samples=19 00:41:22.177 iops : min= 384, max= 448, avg=434.53, stdev=19.42, samples=19 00:41:22.177 lat (msec) : 10=0.02%, 20=0.37%, 50=99.24%, 100=0.37% 00:41:22.177 cpu : usr=98.92%, sys=0.70%, ctx=155, majf=0, minf=1636 00:41:22.177 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.177 issued rwts: total=4369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.178 filename2: (groupid=0, jobs=1): err= 0: pid=3926140: Mon Jul 22 20:48:33 2024 00:41:22.178 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.5MiB/10006msec) 00:41:22.178 slat (usec): min=6, max=122, avg=22.94, stdev=16.00 00:41:22.178 clat (usec): min=7905, max=65290, avg=35488.28, stdev=4778.48 00:41:22.178 lat (usec): min=7925, max=65318, avg=35511.23, stdev=4780.49 00:41:22.178 clat percentiles (usec): 00:41:22.178 | 1.00th=[21890], 5.00th=[26084], 10.00th=[28967], 20.00th=[35914], 00:41:22.178 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:41:22.178 | 70.00th=[36439], 80.00th=[36439], 90.00th=[37487], 95.00th=[39060], 00:41:22.178 | 99.00th=[51119], 99.50th=[58983], 99.90th=[65274], 99.95th=[65274], 00:41:22.178 | 99.99th=[65274] 00:41:22.178 bw ( KiB/s): min= 1539, max= 1984, per=4.23%, avg=1780.37, stdev=115.13, samples=19 00:41:22.178 iops : min= 384, max= 496, avg=445.05, stdev=28.87, samples=19 00:41:22.178 lat (msec) : 10=0.36%, 20=0.49%, 50=98.08%, 100=1.07% 00:41:22.178 cpu : usr=99.09%, sys=0.62%, ctx=14, majf=0, minf=1634 00:41:22.178 IO depths : 1=4.3%, 2=9.0%, 4=19.4%, 8=58.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:41:22.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.178 complete : 0=0.0%, 4=92.6%, 8=2.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.178 issued rwts: total=4484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.178 filename2: (groupid=0, jobs=1): err= 0: pid=3926141: Mon Jul 22 20:48:33 2024 00:41:22.178 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.1MiB/10005msec) 00:41:22.178 slat (nsec): min=6011, max=83212, avg=15592.26, stdev=12586.43 00:41:22.178 clat (usec): min=26064, max=62372, avg=36514.60, stdev=1969.29 00:41:22.178 lat (usec): min=26071, max=62397, avg=36530.19, stdev=1968.25 00:41:22.178 clat percentiles 
(usec): 00:41:22.178 | 1.00th=[28443], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:41:22.178 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36439], 00:41:22.178 | 70.00th=[36439], 80.00th=[36963], 90.00th=[36963], 95.00th=[38011], 00:41:22.178 | 99.00th=[38536], 99.50th=[46400], 99.90th=[62129], 99.95th=[62129], 00:41:22.178 | 99.99th=[62129] 00:41:22.178 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1745.00, stdev=63.23, samples=19 00:41:22.178 iops : min= 416, max= 448, avg=436.21, stdev=15.86, samples=19 00:41:22.178 lat (msec) : 50=99.63%, 100=0.37% 00:41:22.178 cpu : usr=96.64%, sys=1.71%, ctx=79, majf=0, minf=1634 00:41:22.178 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:22.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.178 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:22.178 00:41:22.178 Run status group 0 (all jobs): 00:41:22.178 READ: bw=41.1MiB/s (43.1MB/s), 1701KiB/s-1831KiB/s (1742kB/s-1875kB/s), io=412MiB (432MB), run=10003-10031msec 00:41:22.178 ----------------------------------------------------- 00:41:22.178 Suppressions used: 00:41:22.178 count bytes template 00:41:22.178 45 402 /usr/src/fio/parse.c 00:41:22.178 1 8 libtcmalloc_minimal.so 00:41:22.178 1 904 libcrypto.so 00:41:22.178 ----------------------------------------------------- 00:41:22.178 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 bdev_null0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 [2024-07-22 20:48:33.911375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 bdev_null1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.178 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:22.179 { 00:41:22.179 "params": { 00:41:22.179 "name": "Nvme$subsystem", 00:41:22.179 "trtype": "$TEST_TRANSPORT", 00:41:22.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:22.179 "adrfam": "ipv4", 00:41:22.179 "trsvcid": "$NVMF_PORT", 00:41:22.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:22.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:22.179 "hdgst": ${hdgst:-false}, 00:41:22.179 "ddgst": ${ddgst:-false} 00:41:22.179 }, 00:41:22.179 "method": "bdev_nvme_attach_controller" 00:41:22.179 } 00:41:22.179 EOF 00:41:22.179 )") 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:22.179 { 00:41:22.179 "params": { 00:41:22.179 "name": "Nvme$subsystem", 00:41:22.179 "trtype": "$TEST_TRANSPORT", 00:41:22.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:22.179 "adrfam": "ipv4", 00:41:22.179 "trsvcid": "$NVMF_PORT", 00:41:22.179 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:41:22.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:22.179 "hdgst": ${hdgst:-false}, 00:41:22.179 "ddgst": ${ddgst:-false} 00:41:22.179 }, 00:41:22.179 "method": "bdev_nvme_attach_controller" 00:41:22.179 } 00:41:22.179 EOF 00:41:22.179 )") 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:22.179 20:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:22.179 "params": { 00:41:22.179 "name": "Nvme0", 00:41:22.179 "trtype": "tcp", 00:41:22.179 "traddr": "10.0.0.2", 00:41:22.179 "adrfam": "ipv4", 00:41:22.179 "trsvcid": "4420", 00:41:22.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:22.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:22.179 "hdgst": false, 00:41:22.179 "ddgst": false 00:41:22.179 }, 00:41:22.179 "method": "bdev_nvme_attach_controller" 00:41:22.179 },{ 00:41:22.179 "params": { 00:41:22.179 "name": "Nvme1", 00:41:22.179 "trtype": "tcp", 00:41:22.179 "traddr": "10.0.0.2", 00:41:22.179 "adrfam": "ipv4", 00:41:22.179 "trsvcid": "4420", 00:41:22.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:22.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:22.179 "hdgst": false, 00:41:22.179 "ddgst": false 00:41:22.179 }, 00:41:22.179 "method": "bdev_nvme_attach_controller" 00:41:22.179 }' 00:41:22.179 20:48:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:22.179 20:48:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:22.179 20:48:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:41:22.179 20:48:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:22.179 20:48:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.440 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:22.440 ... 00:41:22.440 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:22.440 ... 
00:41:22.440 fio-3.35 00:41:22.440 Starting 4 threads 00:41:22.440 EAL: No free 2048 kB hugepages reported on node 1 00:41:29.057 00:41:29.057 filename0: (groupid=0, jobs=1): err= 0: pid=3928546: Mon Jul 22 20:48:40 2024 00:41:29.057 read: IOPS=1912, BW=14.9MiB/s (15.7MB/s)(74.8MiB/5003msec) 00:41:29.057 slat (nsec): min=5936, max=59569, avg=7455.82, stdev=2501.40 00:41:29.057 clat (usec): min=1691, max=6663, avg=4161.09, stdev=595.92 00:41:29.057 lat (usec): min=1699, max=6670, avg=4168.55, stdev=595.81 00:41:29.057 clat percentiles (usec): 00:41:29.057 | 1.00th=[ 2802], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3752], 00:41:29.057 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:29.057 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5342], 00:41:29.057 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 6652], 00:41:29.057 | 99.99th=[ 6652] 00:41:29.057 bw ( KiB/s): min=14480, max=15648, per=25.65%, avg=15300.80, stdev=317.42, samples=10 00:41:29.057 iops : min= 1810, max= 1956, avg=1912.60, stdev=39.68, samples=10 00:41:29.057 lat (msec) : 2=0.01%, 4=36.71%, 10=63.28% 00:41:29.057 cpu : usr=96.96%, sys=2.68%, ctx=42, majf=0, minf=1636 00:41:29.057 IO depths : 1=0.4%, 2=2.4%, 4=68.3%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 issued rwts: total=9569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:29.057 filename0: (groupid=0, jobs=1): err= 0: pid=3928547: Mon Jul 22 20:48:40 2024 00:41:29.057 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5001msec) 00:41:29.057 slat (nsec): min=5938, max=38705, avg=7224.76, stdev=2123.14 00:41:29.057 clat (usec): min=1799, max=8255, avg=4286.56, stdev=658.71 00:41:29.057 lat (usec): min=1805, max=8294, avg=4293.79, stdev=658.70 00:41:29.057 clat percentiles (usec): 00:41:29.057 | 1.00th=[ 2966], 5.00th=[ 3490], 10.00th=[ 3654], 20.00th=[ 3884], 00:41:29.057 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:41:29.057 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5145], 95.00th=[ 5800], 00:41:29.057 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7963], 00:41:29.057 | 99.99th=[ 8225] 00:41:29.057 bw ( KiB/s): min=14608, max=15296, per=24.94%, avg=14876.44, stdev=237.29, samples=9 00:41:29.057 iops : min= 1826, max= 1912, avg=1859.56, stdev=29.66, samples=9 00:41:29.057 lat (msec) : 2=0.02%, 4=29.51%, 10=70.47% 00:41:29.057 cpu : usr=96.50%, sys=3.22%, ctx=8, majf=0, minf=1637 00:41:29.057 IO depths : 1=0.5%, 2=1.5%, 4=70.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 issued rwts: total=9288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:29.057 filename1: (groupid=0, jobs=1): err= 0: pid=3928548: Mon Jul 22 20:48:40 2024 00:41:29.057 read: IOPS=1834, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5004msec) 00:41:29.057 slat (nsec): min=5942, max=99156, avg=7387.68, stdev=2659.29 00:41:29.057 clat (usec): min=1640, max=7506, avg=4338.84, stdev=655.93 00:41:29.057 lat (usec): min=1647, max=7512, avg=4346.23, stdev=655.82 00:41:29.057 clat percentiles (usec): 00:41:29.057 | 1.00th=[ 2933], 5.00th=[ 
3490], 10.00th=[ 3720], 20.00th=[ 3916], 00:41:29.057 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:29.057 | 70.00th=[ 4359], 80.00th=[ 4686], 90.00th=[ 5276], 95.00th=[ 5669], 00:41:29.057 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7373], 00:41:29.057 | 99.99th=[ 7504] 00:41:29.057 bw ( KiB/s): min=14272, max=15056, per=24.60%, avg=14676.80, stdev=197.27, samples=10 00:41:29.057 iops : min= 1784, max= 1882, avg=1834.60, stdev=24.66, samples=10 00:41:29.057 lat (msec) : 2=0.03%, 4=26.45%, 10=73.52% 00:41:29.057 cpu : usr=96.84%, sys=2.90%, ctx=9, majf=0, minf=1636 00:41:29.057 IO depths : 1=0.4%, 2=1.9%, 4=69.5%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 issued rwts: total=9181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:29.057 filename1: (groupid=0, jobs=1): err= 0: pid=3928549: Mon Jul 22 20:48:40 2024 00:41:29.057 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:41:29.057 slat (nsec): min=5939, max=42851, avg=7207.40, stdev=2182.00 00:41:29.057 clat (usec): min=1891, max=8040, avg=4294.38, stdev=695.22 00:41:29.057 lat (usec): min=1897, max=8046, avg=4301.59, stdev=695.14 00:41:29.057 clat percentiles (usec): 00:41:29.057 | 1.00th=[ 2868], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3884], 00:41:29.057 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:29.057 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5866], 00:41:29.057 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7046], 99.95th=[ 7635], 00:41:29.057 | 99.99th=[ 8029] 00:41:29.057 bw ( KiB/s): min=14108, max=15136, per=24.86%, avg=14828.40, stdev=301.04, samples=10 00:41:29.057 iops : min= 1763, max= 1892, avg=1853.50, stdev=37.76, samples=10 00:41:29.057 lat (msec) : 2=0.08%, 4=30.44%, 10=69.48% 00:41:29.057 cpu : usr=96.64%, sys=3.08%, ctx=13, majf=0, minf=1636 00:41:29.057 IO depths : 1=0.2%, 2=0.9%, 4=70.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.057 issued rwts: total=9274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:29.057 00:41:29.057 Run status group 0 (all jobs): 00:41:29.057 READ: bw=58.3MiB/s (61.1MB/s), 14.3MiB/s-14.9MiB/s (15.0MB/s-15.7MB/s), io=292MiB (306MB), run=5001-5004msec 00:41:29.057 ----------------------------------------------------- 00:41:29.057 Suppressions used: 00:41:29.057 count bytes template 00:41:29.057 6 52 /usr/src/fio/parse.c 00:41:29.057 1 8 libtcmalloc_minimal.so 00:41:29.057 1 904 libcrypto.so 00:41:29.057 ----------------------------------------------------- 00:41:29.057 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.057 00:41:29.057 real 0m26.598s 00:41:29.057 user 5m18.693s 00:41:29.057 sys 0m5.162s 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:29.057 20:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.057 ************************************ 00:41:29.057 END TEST fio_dif_rand_params 00:41:29.057 ************************************ 00:41:29.057 20:48:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:29.057 20:48:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:29.057 20:48:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:29.058 20:48:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:29.058 20:48:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:29.058 ************************************ 00:41:29.058 START TEST fio_dif_digest 00:41:29.058 ************************************ 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:29.058 
20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:29.058 bdev_null0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:29.058 [2024-07-22 20:48:40.977048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:29.058 { 00:41:29.058 "params": { 00:41:29.058 "name": "Nvme$subsystem", 00:41:29.058 "trtype": "$TEST_TRANSPORT", 00:41:29.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.058 "adrfam": "ipv4", 00:41:29.058 "trsvcid": "$NVMF_PORT", 00:41:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.058 "hdgst": ${hdgst:-false}, 00:41:29.058 "ddgst": ${ddgst:-false} 00:41:29.058 }, 00:41:29.058 "method": "bdev_nvme_attach_controller" 00:41:29.058 } 00:41:29.058 EOF 00:41:29.058 )") 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
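The xtrace above assembles an NVMe-oF controller attachment (with header and data digests enabled) into a JSON config that is fed to the fio SPDK bdev plugin over /dev/fd/62, with the generated job file on /dev/fd/61. A minimal standalone sketch of the same flow follows, assuming the standard "subsystems" wrapper accepted by --spdk_json_conf and a hypothetical job file name (digest.fio); the address, NQNs, digest flags and fio parameters are taken from the trace, while filename=Nvme0n1 relies on SPDK's usual "<controller name>n<nsid>" bdev naming rather than anything shown here:

# JSON config consumed by --spdk_json_conf: attach the TCP controller as bdev "Nvme0".
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

# Hypothetical job file matching the digest run in this log:
# 3 threads, 128k random reads, queue depth 3, 10 second runtime.
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

# The CI run preloads libasan.so.8 alongside the plugin; only the plugin itself is
# strictly required for a local reproduction.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json digest.fio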
00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:41:29.058 20:48:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:29.058 "params": { 00:41:29.058 "name": "Nvme0", 00:41:29.058 "trtype": "tcp", 00:41:29.058 "traddr": "10.0.0.2", 00:41:29.058 "adrfam": "ipv4", 00:41:29.058 "trsvcid": "4420", 00:41:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.058 "hdgst": true, 00:41:29.058 "ddgst": true 00:41:29.058 }, 00:41:29.058 "method": "bdev_nvme_attach_controller" 00:41:29.058 }' 00:41:29.058 20:48:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:29.058 20:48:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:29.058 20:48:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:41:29.058 20:48:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:29.058 20:48:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.625 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:29.625 ... 00:41:29.625 fio-3.35 00:41:29.625 Starting 3 threads 00:41:29.625 EAL: No free 2048 kB hugepages reported on node 1 00:41:41.854 00:41:41.854 filename0: (groupid=0, jobs=1): err= 0: pid=3930009: Mon Jul 22 20:48:52 2024 00:41:41.854 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(235MiB/10050msec) 00:41:41.854 slat (nsec): min=6429, max=47019, avg=10067.31, stdev=1916.86 00:41:41.854 clat (usec): min=9168, max=97200, avg=15981.41, stdev=6177.38 00:41:41.854 lat (usec): min=9177, max=97209, avg=15991.48, stdev=6177.43 00:41:41.854 clat percentiles (usec): 00:41:41.854 | 1.00th=[10552], 5.00th=[11731], 10.00th=[12780], 20.00th=[13960], 00:41:41.854 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[15795], 00:41:41.854 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17171], 95.00th=[17957], 00:41:41.854 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[96994], 00:41:41.854 | 99.99th=[96994] 00:41:41.854 bw ( KiB/s): min=18432, max=26112, per=33.26%, avg=24064.00, stdev=1823.48, samples=20 00:41:41.854 iops : min= 144, max= 204, avg=188.00, stdev=14.25, samples=20 00:41:41.854 lat (msec) : 10=0.43%, 20=97.24%, 50=0.37%, 100=1.97% 00:41:41.854 cpu : usr=95.03%, sys=4.66%, ctx=20, majf=0, minf=1638 00:41:41.854 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:41.854 filename0: (groupid=0, jobs=1): err= 0: pid=3930010: Mon Jul 22 20:48:52 2024 00:41:41.854 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(240MiB/10048msec) 00:41:41.854 slat (nsec): min=6438, max=46549, avg=10420.34, stdev=1816.02 00:41:41.854 clat (usec): min=8222, max=97344, avg=15686.85, stdev=5624.06 00:41:41.854 lat (usec): min=8233, max=97357, avg=15697.27, stdev=5624.03 00:41:41.854 clat percentiles (usec): 00:41:41.854 | 1.00th=[10159], 5.00th=[11469], 10.00th=[12649], 
20.00th=[13829], 00:41:41.854 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:41:41.854 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[17957], 00:41:41.854 | 99.00th=[55837], 99.50th=[56361], 99.90th=[96994], 99.95th=[96994], 00:41:41.854 | 99.99th=[96994] 00:41:41.854 bw ( KiB/s): min=20736, max=26880, per=33.88%, avg=24512.00, stdev=1564.65, samples=20 00:41:41.854 iops : min= 162, max= 210, avg=191.50, stdev=12.22, samples=20 00:41:41.854 lat (msec) : 10=0.94%, 20=97.50%, 50=0.21%, 100=1.36% 00:41:41.854 cpu : usr=95.26%, sys=4.43%, ctx=14, majf=0, minf=1637 00:41:41.854 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:41.854 filename0: (groupid=0, jobs=1): err= 0: pid=3930011: Mon Jul 22 20:48:52 2024 00:41:41.854 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(235MiB/10049msec) 00:41:41.854 slat (nsec): min=6438, max=47328, avg=10372.21, stdev=1820.78 00:41:41.854 clat (usec): min=7711, max=59379, avg=15990.97, stdev=5875.62 00:41:41.854 lat (usec): min=7721, max=59388, avg=16001.34, stdev=5875.74 00:41:41.854 clat percentiles (usec): 00:41:41.854 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[12649], 20.00th=[13960], 00:41:41.854 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:41:41.854 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[18220], 00:41:41.854 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58459], 99.95th=[59507], 00:41:41.854 | 99.99th=[59507] 00:41:41.854 bw ( KiB/s): min=20992, max=26624, per=33.25%, avg=24051.20, stdev=1537.07, samples=20 00:41:41.854 iops : min= 164, max= 208, avg=187.90, stdev=12.01, samples=20 00:41:41.854 lat (msec) : 10=1.33%, 20=96.65%, 50=0.16%, 100=1.86% 00:41:41.854 cpu : usr=95.01%, sys=4.68%, ctx=20, majf=0, minf=1634 00:41:41.854 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.854 issued rwts: total=1881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:41.854 00:41:41.854 Run status group 0 (all jobs): 00:41:41.855 READ: bw=70.6MiB/s (74.1MB/s), 23.4MiB/s-23.8MiB/s (24.5MB/s-25.0MB/s), io=710MiB (744MB), run=10048-10050msec 00:41:41.855 ----------------------------------------------------- 00:41:41.855 Suppressions used: 00:41:41.855 count bytes template 00:41:41.855 5 44 /usr/src/fio/parse.c 00:41:41.855 1 8 libtcmalloc_minimal.so 00:41:41.855 1 904 libcrypto.so 00:41:41.855 ----------------------------------------------------- 00:41:41.855 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:41.855 00:41:41.855 real 0m12.191s 00:41:41.855 user 0m41.615s 00:41:41.855 sys 0m1.938s 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:41.855 20:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.855 ************************************ 00:41:41.855 END TEST fio_dif_digest 00:41:41.855 ************************************ 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:41:41.855 20:48:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:41.855 20:48:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:41.855 rmmod nvme_tcp 00:41:41.855 rmmod nvme_fabrics 00:41:41.855 rmmod nvme_keyring 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3918504 ']' 00:41:41.855 20:48:53 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3918504 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3918504 ']' 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3918504 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918504 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918504' 00:41:41.855 killing process with pid 3918504 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3918504 00:41:41.855 20:48:53 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3918504 00:41:42.426 20:48:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:42.426 20:48:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:45.730 Waiting for block devices as requested 00:41:45.730 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:45.730 0000:80:01.7 (8086 
0b00): vfio-pci -> ioatdma 00:41:45.730 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:45.730 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:45.730 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:45.730 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:45.991 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:45.991 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:45.991 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:46.253 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:46.253 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:46.253 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:46.513 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:46.514 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:46.514 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:46.514 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:46.774 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:47.035 20:48:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:47.035 20:48:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:47.035 20:48:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:47.035 20:48:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:47.035 20:48:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:47.035 20:48:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:47.035 20:48:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.951 20:49:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:48.951 00:41:48.951 real 1m22.649s 00:41:48.951 user 8m10.485s 00:41:48.951 sys 0m21.126s 00:41:48.951 20:49:00 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:48.951 20:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:48.951 ************************************ 00:41:48.951 END TEST nvmf_dif 00:41:48.951 ************************************ 00:41:48.951 20:49:00 -- common/autotest_common.sh@1142 -- # return 0 00:41:48.951 20:49:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:48.951 20:49:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:48.951 20:49:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:48.951 20:49:00 -- common/autotest_common.sh@10 -- # set +x 00:41:49.213 ************************************ 00:41:49.213 START TEST nvmf_abort_qd_sizes 00:41:49.213 ************************************ 00:41:49.213 20:49:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:49.213 * Looking for test storage... 
00:41:49.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:49.213 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:41:49.214 20:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:55.805 20:49:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:55.805 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:55.805 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:55.805 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:55.805 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:55.805 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:55.806 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:56.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:56.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:41:56.067 00:41:56.067 --- 10.0.0.2 ping statistics --- 00:41:56.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.067 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:56.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:56.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:41:56.067 00:41:56.067 --- 10.0.0.1 ping statistics --- 00:41:56.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.067 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:56.067 20:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:59.384 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:59.384 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:59.644 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3939580 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3939580 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3939580 ']' 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:59.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:59.904 20:49:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.163 [2024-07-22 20:49:11.997753] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:00.163 [2024-07-22 20:49:11.997851] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.163 EAL: No free 2048 kB hugepages reported on node 1 00:42:00.163 [2024-07-22 20:49:12.122909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:00.423 [2024-07-22 20:49:12.304940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:00.423 [2024-07-22 20:49:12.304984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:00.423 [2024-07-22 20:49:12.304998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:00.423 [2024-07-22 20:49:12.305008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:00.423 [2024-07-22 20:49:12.305019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:00.423 [2024-07-22 20:49:12.305220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:00.423 [2024-07-22 20:49:12.305286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:00.423 [2024-07-22 20:49:12.305612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.423 [2024-07-22 20:49:12.305635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:42:00.994 20:49:12 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:00.994 20:49:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.994 ************************************ 00:42:00.994 START TEST spdk_target_abort 00:42:00.994 ************************************ 00:42:00.994 20:49:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:42:00.994 20:49:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:00.994 20:49:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:42:00.994 20:49:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.994 20:49:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.255 spdk_targetn1 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.255 [2024-07-22 20:49:13.199019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.255 [2024-07-22 20:49:13.239569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.255 20:49:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.524 EAL: No free 2048 kB hugepages 
reported on node 1 00:42:01.524 [2024-07-22 20:49:13.481422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:248 len:8 PRP1 0x2000078c1000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.481456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0022 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.482465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:304 len:8 PRP1 0x2000078c3000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.482487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0029 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.488692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:440 len:8 PRP1 0x2000078bd000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.488714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.489676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:488 len:8 PRP1 0x2000078c1000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.489696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.504559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:904 len:8 PRP1 0x2000078c1000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.504584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0073 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.528877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1704 len:8 PRP1 0x2000078bd000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.528900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d6 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.529282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1728 len:8 PRP1 0x2000078c1000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.529299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d9 p:1 m:0 dnr:0 00:42:01.524 [2024-07-22 20:49:13.534743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1832 len:8 PRP1 0x2000078bd000 PRP2 0x0 00:42:01.524 [2024-07-22 20:49:13.534763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e6 p:1 m:0 dnr:0 00:42:01.784 [2024-07-22 20:49:13.558754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2552 len:8 PRP1 0x2000078bd000 PRP2 0x0 00:42:01.784 [2024-07-22 20:49:13.558776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:01.784 [2024-07-22 20:49:13.581700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3312 len:8 PRP1 0x2000078c1000 PRP2 0x0 00:42:01.784 [2024-07-22 20:49:13.581721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a1 p:0 m:0 dnr:0 00:42:01.784 [2024-07-22 20:49:13.605770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 
nsid:1 lba:4040 len:8 PRP1 0x2000078c5000 PRP2 0x0 00:42:01.784 [2024-07-22 20:49:13.605791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:42:05.132 Initializing NVMe Controllers 00:42:05.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:05.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:05.132 Initialization complete. Launching workers. 00:42:05.132 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11202, failed: 11 00:42:05.132 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3377, failed to submit 7836 00:42:05.132 success 762, unsuccess 2615, failed 0 00:42:05.132 20:49:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:05.132 20:49:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:05.132 EAL: No free 2048 kB hugepages reported on node 1 00:42:05.132 [2024-07-22 20:49:16.759631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1056 len:8 PRP1 0x200007c47000 PRP2 0x0 00:42:05.132 [2024-07-22 20:49:16.759681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:42:05.132 [2024-07-22 20:49:16.783461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1576 len:8 PRP1 0x200007c3f000 PRP2 0x0 00:42:05.132 [2024-07-22 20:49:16.783497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:42:05.132 [2024-07-22 20:49:16.799588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2024 len:8 PRP1 0x200007c47000 PRP2 0x0 00:42:05.132 [2024-07-22 20:49:16.799618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:42:05.132 [2024-07-22 20:49:16.850681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3104 len:8 PRP1 0x200007c4b000 PRP2 0x0 00:42:05.132 [2024-07-22 20:49:16.850714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0086 p:0 m:0 dnr:0 00:42:08.435 Initializing NVMe Controllers 00:42:08.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:08.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:08.435 Initialization complete. Launching workers. 
00:42:08.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8703, failed: 4 00:42:08.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7456 00:42:08.435 success 293, unsuccess 958, failed 0 00:42:08.435 20:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:08.435 20:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:08.435 EAL: No free 2048 kB hugepages reported on node 1 00:42:11.735 Initializing NVMe Controllers 00:42:11.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:11.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:11.735 Initialization complete. Launching workers. 00:42:11.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37704, failed: 0 00:42:11.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2635, failed to submit 35069 00:42:11.735 success 572, unsuccess 2063, failed 0 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.735 20:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3939580 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3939580 ']' 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3939580 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3939580 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3939580' 00:42:13.648 killing process with pid 3939580 00:42:13.648 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3939580 00:42:13.648 20:49:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3939580 00:42:14.221 00:42:14.221 real 0m13.123s 00:42:14.221 user 0m51.858s 00:42:14.221 sys 0m2.104s 00:42:14.221 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:14.221 20:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:14.221 ************************************ 00:42:14.221 END TEST spdk_target_abort 00:42:14.221 ************************************ 00:42:14.221 20:49:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:42:14.221 20:49:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:14.221 20:49:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:14.221 20:49:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:14.221 20:49:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.221 ************************************ 00:42:14.221 START TEST kernel_target_abort 00:42:14.221 ************************************ 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 
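A minimal sketch of the loop driving the spdk_target_abort phase that finishes above, reconstructed only from the commands and options visible in this log (it is a sketch, not a verbatim excerpt of abort_qd_sizes.sh): the test simply sweeps the SPDK abort example over increasing queue depths against the TCP subsystem it created earlier.

    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # mixed read/write workload, 4 KiB I/O, while abort commands are issued
        # against in-flight commands on the connected controller
        ./spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each iteration prints the per-run totals seen above (I/O completed, aborts submitted, success/unsuccess counts); larger queue depths leave more commands outstanding for the abort submissions to hit.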
00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:14.221 20:49:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:16.766 Waiting for block devices as requested 00:42:16.766 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:16.766 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:17.027 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:17.027 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:17.027 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:17.027 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:17.287 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:17.287 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:17.287 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:17.548 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:17.548 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:17.548 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:17.808 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:17.808 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:17.808 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:18.069 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:18.069 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:19.011 No valid GPT data, bailing 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:19.011 20:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:42:19.011 00:42:19.011 Discovery Log Number of Records 2, Generation counter 2 00:42:19.011 =====Discovery Log Entry 0====== 00:42:19.011 trtype: tcp 00:42:19.011 adrfam: ipv4 00:42:19.011 subtype: current discovery subsystem 00:42:19.011 treq: not specified, sq flow control disable supported 00:42:19.011 portid: 1 00:42:19.011 trsvcid: 4420 00:42:19.011 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:19.011 traddr: 10.0.0.1 00:42:19.011 eflags: none 00:42:19.011 sectype: none 00:42:19.011 =====Discovery Log Entry 1====== 00:42:19.011 trtype: tcp 00:42:19.011 adrfam: ipv4 00:42:19.011 subtype: nvme subsystem 00:42:19.011 treq: not specified, sq flow control disable supported 00:42:19.011 portid: 1 00:42:19.011 trsvcid: 4420 00:42:19.011 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:19.011 traddr: 10.0.0.1 00:42:19.011 eflags: none 00:42:19.011 sectype: none 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:19.272 20:49:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:19.272 EAL: No free 2048 kB hugepages reported on node 1 00:42:22.574 Initializing NVMe Controllers 00:42:22.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:22.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:22.574 Initialization complete. Launching workers. 00:42:22.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49489, failed: 0 00:42:22.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 49489, failed to submit 0 00:42:22.574 success 0, unsuccess 49489, failed 0 00:42:22.574 20:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:22.574 20:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:22.574 EAL: No free 2048 kB hugepages reported on node 1 00:42:25.874 Initializing NVMe Controllers 00:42:25.874 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:25.874 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:25.874 Initialization complete. Launching workers. 
00:42:25.874 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87308, failed: 0 00:42:25.874 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21998, failed to submit 65310 00:42:25.874 success 0, unsuccess 21998, failed 0 00:42:25.874 20:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:25.874 20:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:25.874 EAL: No free 2048 kB hugepages reported on node 1 00:42:29.176 Initializing NVMe Controllers 00:42:29.176 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:29.176 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:29.176 Initialization complete. Launching workers. 00:42:29.176 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83897, failed: 0 00:42:29.176 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20950, failed to submit 62947 00:42:29.176 success 0, unsuccess 20950, failed 0 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:29.176 20:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:31.724 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:31.724 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:42:31.984 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:31.984 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:33.898 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:33.898 00:42:33.898 real 0m19.860s 00:42:33.898 user 0m8.640s 00:42:33.898 sys 0m6.240s 00:42:33.898 20:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:33.898 20:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:33.898 ************************************ 00:42:33.898 END TEST kernel_target_abort 00:42:33.898 ************************************ 00:42:34.166 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:34.167 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:34.167 rmmod nvme_tcp 00:42:34.167 rmmod nvme_fabrics 00:42:34.167 rmmod nvme_keyring 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3939580 ']' 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3939580 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3939580 ']' 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3939580 00:42:34.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3939580) - No such process 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3939580 is not found' 00:42:34.167 Process with pid 3939580 is not found 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:34.167 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:37.588 Waiting for block devices as requested 00:42:37.588 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:37.588 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:37.589 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:37.589 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:37.589 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:37.849 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:37.849 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:37.849 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:38.110 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:38.110 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:38.371 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:38.371 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:38.371 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:38.632 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:42:38.632 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:38.632 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:38.632 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:38.894 20:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:41.443 20:49:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:41.443 00:42:41.443 real 0m51.938s 00:42:41.443 user 1m5.595s 00:42:41.443 sys 0m18.808s 00:42:41.443 20:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:41.443 20:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:41.443 ************************************ 00:42:41.443 END TEST nvmf_abort_qd_sizes 00:42:41.443 ************************************ 00:42:41.443 20:49:52 -- common/autotest_common.sh@1142 -- # return 0 00:42:41.444 20:49:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:41.444 20:49:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:41.444 20:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:41.444 20:49:52 -- common/autotest_common.sh@10 -- # set +x 00:42:41.444 ************************************ 00:42:41.444 START TEST keyring_file 00:42:41.444 ************************************ 00:42:41.444 20:49:53 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:41.444 * Looking for test storage... 
00:42:41.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:41.444 20:49:53 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:41.444 20:49:53 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:41.444 20:49:53 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:41.444 20:49:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:41.444 20:49:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:41.444 20:49:53 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:41.444 20:49:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:41.444 20:49:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@47 -- # : 0 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:41.444 20:49:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c2bsTrkrjz 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:41.444 20:49:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:41.444 20:49:53 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c2bsTrkrjz 00:42:41.444 20:49:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c2bsTrkrjz 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.c2bsTrkrjz 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f4hMsmPQKu 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:41.445 20:49:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f4hMsmPQKu 00:42:41.445 20:49:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f4hMsmPQKu 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.f4hMsmPQKu 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=3950129 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3950129 00:42:41.445 20:49:53 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3950129 ']' 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:41.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:41.445 20:49:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:41.445 [2024-07-22 20:49:53.366800] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:41.445 [2024-07-22 20:49:53.366939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950129 ] 00:42:41.445 EAL: No free 2048 kB hugepages reported on node 1 00:42:41.709 [2024-07-22 20:49:53.493548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.709 [2024-07-22 20:49:53.675786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:42.280 20:49:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:42.280 [2024-07-22 20:49:54.252156] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:42.280 null0 00:42:42.280 [2024-07-22 20:49:54.284198] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:42.280 [2024-07-22 20:49:54.284593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:42.280 [2024-07-22 20:49:54.292222] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.280 20:49:54 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:42.280 20:49:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:42.281 20:49:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:42.281 20:49:54 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:42.281 20:49:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:42.281 20:49:54 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:42.281 20:49:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.541 20:49:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:42.541 [2024-07-22 20:49:54.308269] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:42.541 request: 00:42:42.541 { 00:42:42.541 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.541 "secure_channel": false, 00:42:42.541 "listen_address": { 00:42:42.541 "trtype": "tcp", 00:42:42.541 "traddr": "127.0.0.1", 00:42:42.541 "trsvcid": "4420" 00:42:42.541 }, 00:42:42.541 "method": "nvmf_subsystem_add_listener", 00:42:42.541 "req_id": 1 00:42:42.541 } 00:42:42.541 Got JSON-RPC error response 00:42:42.541 response: 00:42:42.541 { 00:42:42.541 "code": -32602, 00:42:42.542 "message": "Invalid parameters" 00:42:42.542 } 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:42.542 20:49:54 keyring_file -- keyring/file.sh@46 -- # bperfpid=3950329 00:42:42.542 20:49:54 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3950329 /var/tmp/bperf.sock 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3950329 ']' 00:42:42.542 20:49:54 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:42.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:42.542 20:49:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:42.542 [2024-07-22 20:49:54.401919] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:42:42.542 [2024-07-22 20:49:54.402021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950329 ] 00:42:42.542 EAL: No free 2048 kB hugepages reported on node 1 00:42:42.542 [2024-07-22 20:49:54.527510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.803 [2024-07-22 20:49:54.702726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.375 20:49:55 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:43.375 20:49:55 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:43.375 20:49:55 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:43.375 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:43.375 20:49:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f4hMsmPQKu 00:42:43.375 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f4hMsmPQKu 00:42:43.636 20:49:55 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:42:43.636 20:49:55 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.636 20:49:55 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.c2bsTrkrjz == \/\t\m\p\/\t\m\p\.\c\2\b\s\T\r\k\r\j\z ]] 00:42:43.636 20:49:55 keyring_file -- 
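A condensed sketch of the keyring_file setup running here, reconstructed from the RPC names and options in the log: the test writes two interchange-format PSKs (NVMeTLSkey-1:...) to temporary files with 0600 permissions, then registers them with the bdevperf instance over its RPC socket. The temp-file path below is a placeholder for the mktemp output, not a fixed name:

    key0path=$(mktemp)            # the log happens to get /tmp/tmp.c2bsTrkrjz
    # format_interchange_psk writes the NVMeTLSkey-1 form of
    # 00112233445566778899aabbccddeeff into "$key0path" at this point
    chmod 0600 "$key0path"        # keyring_file rejects looser modes (see the 0660 check later)
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'

The same steps are repeated for key1, and the registered names key0/key1 are what the later bdev_nvme_attach_controller calls reference through --psk.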
keyring/file.sh@52 -- # get_key key1 00:42:43.636 20:49:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.636 20:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:43.901 20:49:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.f4hMsmPQKu == \/\t\m\p\/\t\m\p\.\f\4\h\M\s\m\P\Q\K\u ]] 00:42:43.901 20:49:55 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.901 20:49:55 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:42:43.901 20:49:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.901 20:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:44.163 20:49:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:44.164 20:49:56 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:44.164 20:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:44.424 [2024-07-22 20:49:56.191137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:44.424 nvme0n1 00:42:44.424 20:49:56 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:42:44.424 20:49:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:44.424 20:49:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.424 20:49:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.424 20:49:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.424 20:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.686 20:49:56 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:42:44.686 20:49:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:42:44.686 20:49:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:44.686 20:49:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.686 20:49:56 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.686 20:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.686 20:49:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:44.686 20:49:56 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:42:44.686 20:49:56 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:44.686 Running I/O for 1 seconds... 00:42:46.073 00:42:46.073 Latency(us) 00:42:46.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:46.073 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:46.073 nvme0n1 : 1.01 8872.40 34.66 0.00 0.00 14320.15 9338.88 24139.09 00:42:46.073 =================================================================================================================== 00:42:46.073 Total : 8872.40 34.66 0.00 0.00 14320.15 9338.88 24139.09 00:42:46.073 0 00:42:46.073 20:49:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:46.073 20:49:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:46.073 20:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.073 20:49:58 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:42:46.073 20:49:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:42:46.073 20:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:46.073 20:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.073 20:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.073 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.073 20:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:46.334 20:49:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:46.334 20:49:58 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:46.334 20:49:58 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:46.334 20:49:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:46.334 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:46.597 [2024-07-22 20:49:58.373943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:46.597 [2024-07-22 20:49:58.374305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (107): Transport endpoint is not connected 00:42:46.597 [2024-07-22 20:49:58.375292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d680 (9): Bad file descriptor 00:42:46.597 [2024-07-22 20:49:58.376289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:46.597 [2024-07-22 20:49:58.376310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:46.597 [2024-07-22 20:49:58.376318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:46.597 request: 00:42:46.597 { 00:42:46.597 "name": "nvme0", 00:42:46.597 "trtype": "tcp", 00:42:46.597 "traddr": "127.0.0.1", 00:42:46.597 "adrfam": "ipv4", 00:42:46.597 "trsvcid": "4420", 00:42:46.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.597 "prchk_reftag": false, 00:42:46.597 "prchk_guard": false, 00:42:46.597 "hdgst": false, 00:42:46.597 "ddgst": false, 00:42:46.597 "psk": "key1", 00:42:46.597 "method": "bdev_nvme_attach_controller", 00:42:46.597 "req_id": 1 00:42:46.597 } 00:42:46.597 Got JSON-RPC error response 00:42:46.597 response: 00:42:46.597 { 00:42:46.597 "code": -5, 00:42:46.597 "message": "Input/output error" 00:42:46.597 } 00:42:46.597 20:49:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:46.597 20:49:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:46.597 20:49:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:46.597 20:49:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:46.597 20:49:58 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:46.597 20:49:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:42:46.597 20:49:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:46.597 
20:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.597 20:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:46.859 20:49:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:46.859 20:49:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:42:46.859 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:46.859 20:49:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:42:46.859 20:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:47.120 20:49:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:42:47.120 20:49:59 keyring_file -- keyring/file.sh@77 -- # jq length 00:42:47.120 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.382 20:49:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:42:47.382 20:49:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.382 [2024-07-22 20:49:59.320621] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.c2bsTrkrjz': 0100660 00:42:47.382 [2024-07-22 20:49:59.320651] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:47.382 request: 00:42:47.382 { 00:42:47.382 "name": "key0", 00:42:47.382 "path": "/tmp/tmp.c2bsTrkrjz", 00:42:47.382 "method": "keyring_file_add_key", 00:42:47.382 "req_id": 1 00:42:47.382 } 00:42:47.382 Got JSON-RPC error response 00:42:47.382 response: 00:42:47.382 { 00:42:47.382 "code": -1, 00:42:47.382 "message": "Operation not permitted" 00:42:47.382 } 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:47.382 20:49:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:47.382 20:49:59 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:47.382 20:49:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.382 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c2bsTrkrjz 00:42:47.643 20:49:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.c2bsTrkrjz 00:42:47.643 20:49:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.643 20:49:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:42:47.643 20:49:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:47.643 20:49:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:47.643 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:47.905 [2024-07-22 20:49:59.801884] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.c2bsTrkrjz': No such file or directory 00:42:47.905 [2024-07-22 20:49:59.801913] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:47.905 [2024-07-22 20:49:59.801938] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:47.905 [2024-07-22 20:49:59.801946] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:47.905 [2024-07-22 20:49:59.801954] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:47.905 request: 00:42:47.905 { 00:42:47.905 "name": "nvme0", 00:42:47.905 "trtype": "tcp", 00:42:47.905 "traddr": "127.0.0.1", 00:42:47.905 "adrfam": "ipv4", 00:42:47.905 
"trsvcid": "4420", 00:42:47.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.905 "prchk_reftag": false, 00:42:47.905 "prchk_guard": false, 00:42:47.905 "hdgst": false, 00:42:47.905 "ddgst": false, 00:42:47.905 "psk": "key0", 00:42:47.905 "method": "bdev_nvme_attach_controller", 00:42:47.905 "req_id": 1 00:42:47.905 } 00:42:47.905 Got JSON-RPC error response 00:42:47.905 response: 00:42:47.905 { 00:42:47.905 "code": -19, 00:42:47.905 "message": "No such device" 00:42:47.905 } 00:42:47.905 20:49:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:42:47.905 20:49:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:47.905 20:49:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:47.905 20:49:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:47.905 20:49:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:42:47.905 20:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:48.166 20:49:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wbgr8zA83t 00:42:48.166 20:49:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:42:48.166 20:49:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:42:48.166 20:50:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wbgr8zA83t 00:42:48.166 20:50:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wbgr8zA83t 00:42:48.166 20:50:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.wbgr8zA83t 00:42:48.166 20:50:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wbgr8zA83t 00:42:48.166 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wbgr8zA83t 00:42:48.166 20:50:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:48.166 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:48.427 nvme0n1 00:42:48.427 
20:50:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:42:48.427 20:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:48.427 20:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:48.427 20:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.427 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.428 20:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:48.688 20:50:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:42:48.688 20:50:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:42:48.688 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:48.950 20:50:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:42:48.950 20:50:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.950 20:50:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:42:48.950 20:50:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:48.950 20:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.210 20:50:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:42:49.210 20:50:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:49.210 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:49.210 20:50:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:42:49.210 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.210 20:50:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:42:49.471 20:50:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:42:49.471 20:50:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wbgr8zA83t 00:42:49.471 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wbgr8zA83t 00:42:49.732 20:50:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f4hMsmPQKu 00:42:49.732 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f4hMsmPQKu 00:42:49.732 20:50:01 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:49.732 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:49.993 nvme0n1 00:42:49.993 20:50:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:42:49.993 20:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:50.255 20:50:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:42:50.255 "subsystems": [ 00:42:50.255 { 00:42:50.255 "subsystem": "keyring", 00:42:50.255 "config": [ 00:42:50.255 { 00:42:50.255 "method": "keyring_file_add_key", 00:42:50.255 "params": { 00:42:50.255 "name": "key0", 00:42:50.255 "path": "/tmp/tmp.wbgr8zA83t" 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "keyring_file_add_key", 00:42:50.255 "params": { 00:42:50.255 "name": "key1", 00:42:50.255 "path": "/tmp/tmp.f4hMsmPQKu" 00:42:50.255 } 00:42:50.255 } 00:42:50.255 ] 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "subsystem": "iobuf", 00:42:50.255 "config": [ 00:42:50.255 { 00:42:50.255 "method": "iobuf_set_options", 00:42:50.255 "params": { 00:42:50.255 "small_pool_count": 8192, 00:42:50.255 "large_pool_count": 1024, 00:42:50.255 "small_bufsize": 8192, 00:42:50.255 "large_bufsize": 135168 00:42:50.255 } 00:42:50.255 } 00:42:50.255 ] 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "subsystem": "sock", 00:42:50.255 "config": [ 00:42:50.255 { 00:42:50.255 "method": "sock_set_default_impl", 00:42:50.255 "params": { 00:42:50.255 "impl_name": "posix" 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "sock_impl_set_options", 00:42:50.255 "params": { 00:42:50.255 "impl_name": "ssl", 00:42:50.255 "recv_buf_size": 4096, 00:42:50.255 "send_buf_size": 4096, 00:42:50.255 "enable_recv_pipe": true, 00:42:50.255 "enable_quickack": false, 00:42:50.255 "enable_placement_id": 0, 00:42:50.255 "enable_zerocopy_send_server": true, 00:42:50.255 "enable_zerocopy_send_client": false, 00:42:50.255 "zerocopy_threshold": 0, 00:42:50.255 "tls_version": 0, 00:42:50.255 "enable_ktls": false 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "sock_impl_set_options", 00:42:50.255 "params": { 00:42:50.255 "impl_name": "posix", 00:42:50.255 "recv_buf_size": 2097152, 00:42:50.255 "send_buf_size": 2097152, 00:42:50.255 "enable_recv_pipe": true, 00:42:50.255 "enable_quickack": false, 00:42:50.255 "enable_placement_id": 0, 00:42:50.255 "enable_zerocopy_send_server": true, 00:42:50.255 "enable_zerocopy_send_client": false, 00:42:50.255 "zerocopy_threshold": 0, 00:42:50.255 "tls_version": 0, 00:42:50.255 "enable_ktls": false 00:42:50.255 } 00:42:50.255 } 00:42:50.255 ] 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "subsystem": "vmd", 00:42:50.255 "config": [] 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "subsystem": "accel", 00:42:50.255 "config": [ 00:42:50.255 { 00:42:50.255 "method": "accel_set_options", 00:42:50.255 "params": { 00:42:50.255 "small_cache_size": 128, 00:42:50.255 "large_cache_size": 16, 00:42:50.255 "task_count": 2048, 00:42:50.255 "sequence_count": 2048, 00:42:50.255 "buf_count": 2048 00:42:50.255 } 00:42:50.255 } 00:42:50.255 ] 00:42:50.255 
}, 00:42:50.255 { 00:42:50.255 "subsystem": "bdev", 00:42:50.255 "config": [ 00:42:50.255 { 00:42:50.255 "method": "bdev_set_options", 00:42:50.255 "params": { 00:42:50.255 "bdev_io_pool_size": 65535, 00:42:50.255 "bdev_io_cache_size": 256, 00:42:50.255 "bdev_auto_examine": true, 00:42:50.255 "iobuf_small_cache_size": 128, 00:42:50.255 "iobuf_large_cache_size": 16 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "bdev_raid_set_options", 00:42:50.255 "params": { 00:42:50.255 "process_window_size_kb": 1024, 00:42:50.255 "process_max_bandwidth_mb_sec": 0 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "bdev_iscsi_set_options", 00:42:50.255 "params": { 00:42:50.255 "timeout_sec": 30 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "bdev_nvme_set_options", 00:42:50.255 "params": { 00:42:50.255 "action_on_timeout": "none", 00:42:50.255 "timeout_us": 0, 00:42:50.255 "timeout_admin_us": 0, 00:42:50.255 "keep_alive_timeout_ms": 10000, 00:42:50.255 "arbitration_burst": 0, 00:42:50.255 "low_priority_weight": 0, 00:42:50.255 "medium_priority_weight": 0, 00:42:50.255 "high_priority_weight": 0, 00:42:50.255 "nvme_adminq_poll_period_us": 10000, 00:42:50.255 "nvme_ioq_poll_period_us": 0, 00:42:50.255 "io_queue_requests": 512, 00:42:50.255 "delay_cmd_submit": true, 00:42:50.255 "transport_retry_count": 4, 00:42:50.255 "bdev_retry_count": 3, 00:42:50.255 "transport_ack_timeout": 0, 00:42:50.255 "ctrlr_loss_timeout_sec": 0, 00:42:50.255 "reconnect_delay_sec": 0, 00:42:50.255 "fast_io_fail_timeout_sec": 0, 00:42:50.255 "disable_auto_failback": false, 00:42:50.255 "generate_uuids": false, 00:42:50.255 "transport_tos": 0, 00:42:50.255 "nvme_error_stat": false, 00:42:50.255 "rdma_srq_size": 0, 00:42:50.255 "io_path_stat": false, 00:42:50.255 "allow_accel_sequence": false, 00:42:50.255 "rdma_max_cq_size": 0, 00:42:50.255 "rdma_cm_event_timeout_ms": 0, 00:42:50.255 "dhchap_digests": [ 00:42:50.255 "sha256", 00:42:50.255 "sha384", 00:42:50.255 "sha512" 00:42:50.255 ], 00:42:50.255 "dhchap_dhgroups": [ 00:42:50.255 "null", 00:42:50.255 "ffdhe2048", 00:42:50.255 "ffdhe3072", 00:42:50.255 "ffdhe4096", 00:42:50.255 "ffdhe6144", 00:42:50.255 "ffdhe8192" 00:42:50.255 ] 00:42:50.255 } 00:42:50.255 }, 00:42:50.255 { 00:42:50.255 "method": "bdev_nvme_attach_controller", 00:42:50.255 "params": { 00:42:50.255 "name": "nvme0", 00:42:50.255 "trtype": "TCP", 00:42:50.256 "adrfam": "IPv4", 00:42:50.256 "traddr": "127.0.0.1", 00:42:50.256 "trsvcid": "4420", 00:42:50.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:50.256 "prchk_reftag": false, 00:42:50.256 "prchk_guard": false, 00:42:50.256 "ctrlr_loss_timeout_sec": 0, 00:42:50.256 "reconnect_delay_sec": 0, 00:42:50.256 "fast_io_fail_timeout_sec": 0, 00:42:50.256 "psk": "key0", 00:42:50.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:50.256 "hdgst": false, 00:42:50.256 "ddgst": false 00:42:50.256 } 00:42:50.256 }, 00:42:50.256 { 00:42:50.256 "method": "bdev_nvme_set_hotplug", 00:42:50.256 "params": { 00:42:50.256 "period_us": 100000, 00:42:50.256 "enable": false 00:42:50.256 } 00:42:50.256 }, 00:42:50.256 { 00:42:50.256 "method": "bdev_wait_for_examine" 00:42:50.256 } 00:42:50.256 ] 00:42:50.256 }, 00:42:50.256 { 00:42:50.256 "subsystem": "nbd", 00:42:50.256 "config": [] 00:42:50.256 } 00:42:50.256 ] 00:42:50.256 }' 00:42:50.256 20:50:02 keyring_file -- keyring/file.sh@114 -- # killprocess 3950329 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3950329 ']' 00:42:50.256 20:50:02 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 3950329 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3950329 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3950329' 00:42:50.256 killing process with pid 3950329 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@967 -- # kill 3950329 00:42:50.256 Received shutdown signal, test time was about 1.000000 seconds 00:42:50.256 00:42:50.256 Latency(us) 00:42:50.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:50.256 =================================================================================================================== 00:42:50.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:50.256 20:50:02 keyring_file -- common/autotest_common.sh@972 -- # wait 3950329 00:42:50.828 20:50:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=3952104 00:42:50.828 20:50:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3952104 /var/tmp/bperf.sock 00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3952104 ']' 00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:50.828 20:50:02 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:50.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
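At this point the first bdevperf has been shut down and a second one (pid 3952104) is being started from the configuration that save_config captured a moment earlier, so the two file-based keys and the nvme0 controller are recreated at startup instead of being re-added over RPC. The /dev/fd/63 in the command line is the tell-tale of bash process substitution; a hedged sketch of how file.sh feeds the saved JSON back in (the variable name is illustrative):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # $config was captured from the previous instance with:  rpc.py -s /var/tmp/bperf.sock save_config
  $bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")     # shows up as -c /dev/fd/63 above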
00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:50.828 20:50:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:50.828 20:50:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:42:50.828 "subsystems": [ 00:42:50.828 { 00:42:50.828 "subsystem": "keyring", 00:42:50.828 "config": [ 00:42:50.828 { 00:42:50.828 "method": "keyring_file_add_key", 00:42:50.828 "params": { 00:42:50.828 "name": "key0", 00:42:50.828 "path": "/tmp/tmp.wbgr8zA83t" 00:42:50.828 } 00:42:50.828 }, 00:42:50.828 { 00:42:50.828 "method": "keyring_file_add_key", 00:42:50.828 "params": { 00:42:50.828 "name": "key1", 00:42:50.828 "path": "/tmp/tmp.f4hMsmPQKu" 00:42:50.828 } 00:42:50.828 } 00:42:50.828 ] 00:42:50.828 }, 00:42:50.828 { 00:42:50.828 "subsystem": "iobuf", 00:42:50.828 "config": [ 00:42:50.828 { 00:42:50.828 "method": "iobuf_set_options", 00:42:50.828 "params": { 00:42:50.828 "small_pool_count": 8192, 00:42:50.828 "large_pool_count": 1024, 00:42:50.828 "small_bufsize": 8192, 00:42:50.828 "large_bufsize": 135168 00:42:50.828 } 00:42:50.828 } 00:42:50.828 ] 00:42:50.828 }, 00:42:50.828 { 00:42:50.828 "subsystem": "sock", 00:42:50.828 "config": [ 00:42:50.828 { 00:42:50.828 "method": "sock_set_default_impl", 00:42:50.828 "params": { 00:42:50.828 "impl_name": "posix" 00:42:50.828 } 00:42:50.828 }, 00:42:50.828 { 00:42:50.828 "method": "sock_impl_set_options", 00:42:50.828 "params": { 00:42:50.828 "impl_name": "ssl", 00:42:50.828 "recv_buf_size": 4096, 00:42:50.828 "send_buf_size": 4096, 00:42:50.828 "enable_recv_pipe": true, 00:42:50.828 "enable_quickack": false, 00:42:50.828 "enable_placement_id": 0, 00:42:50.828 "enable_zerocopy_send_server": true, 00:42:50.828 "enable_zerocopy_send_client": false, 00:42:50.828 "zerocopy_threshold": 0, 00:42:50.828 "tls_version": 0, 00:42:50.828 "enable_ktls": false 00:42:50.828 } 00:42:50.828 }, 00:42:50.828 { 00:42:50.828 "method": "sock_impl_set_options", 00:42:50.828 "params": { 00:42:50.828 "impl_name": "posix", 00:42:50.828 "recv_buf_size": 2097152, 00:42:50.828 "send_buf_size": 2097152, 00:42:50.829 "enable_recv_pipe": true, 00:42:50.829 "enable_quickack": false, 00:42:50.829 "enable_placement_id": 0, 00:42:50.829 "enable_zerocopy_send_server": true, 00:42:50.829 "enable_zerocopy_send_client": false, 00:42:50.829 "zerocopy_threshold": 0, 00:42:50.829 "tls_version": 0, 00:42:50.829 "enable_ktls": false 00:42:50.829 } 00:42:50.829 } 00:42:50.829 ] 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "subsystem": "vmd", 00:42:50.829 "config": [] 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "subsystem": "accel", 00:42:50.829 "config": [ 00:42:50.829 { 00:42:50.829 "method": "accel_set_options", 00:42:50.829 "params": { 00:42:50.829 "small_cache_size": 128, 00:42:50.829 "large_cache_size": 16, 00:42:50.829 "task_count": 2048, 00:42:50.829 "sequence_count": 2048, 00:42:50.829 "buf_count": 2048 00:42:50.829 } 00:42:50.829 } 00:42:50.829 ] 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "subsystem": "bdev", 00:42:50.829 "config": [ 00:42:50.829 { 00:42:50.829 "method": "bdev_set_options", 00:42:50.829 "params": { 00:42:50.829 "bdev_io_pool_size": 65535, 00:42:50.829 "bdev_io_cache_size": 256, 00:42:50.829 "bdev_auto_examine": true, 00:42:50.829 "iobuf_small_cache_size": 128, 00:42:50.829 "iobuf_large_cache_size": 16 00:42:50.829 } 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_raid_set_options", 00:42:50.829 "params": { 00:42:50.829 "process_window_size_kb": 1024, 00:42:50.829 "process_max_bandwidth_mb_sec": 0 00:42:50.829 
} 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_iscsi_set_options", 00:42:50.829 "params": { 00:42:50.829 "timeout_sec": 30 00:42:50.829 } 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_nvme_set_options", 00:42:50.829 "params": { 00:42:50.829 "action_on_timeout": "none", 00:42:50.829 "timeout_us": 0, 00:42:50.829 "timeout_admin_us": 0, 00:42:50.829 "keep_alive_timeout_ms": 10000, 00:42:50.829 "arbitration_burst": 0, 00:42:50.829 "low_priority_weight": 0, 00:42:50.829 "medium_priority_weight": 0, 00:42:50.829 "high_priority_weight": 0, 00:42:50.829 "nvme_adminq_poll_period_us": 10000, 00:42:50.829 "nvme_ioq_poll_period_us": 0, 00:42:50.829 "io_queue_requests": 512, 00:42:50.829 "delay_cmd_submit": true, 00:42:50.829 "transport_retry_count": 4, 00:42:50.829 "bdev_retry_count": 3, 00:42:50.829 "transport_ack_timeout": 0, 00:42:50.829 "ctrlr_loss_timeout_sec": 0, 00:42:50.829 "reconnect_delay_sec": 0, 00:42:50.829 "fast_io_fail_timeout_sec": 0, 00:42:50.829 "disable_auto_failback": false, 00:42:50.829 "generate_uuids": false, 00:42:50.829 "transport_tos": 0, 00:42:50.829 "nvme_error_stat": false, 00:42:50.829 "rdma_srq_size": 0, 00:42:50.829 "io_path_stat": false, 00:42:50.829 "allow_accel_sequence": false, 00:42:50.829 "rdma_max_cq_size": 0, 00:42:50.829 "rdma_cm_event_timeout_ms": 0, 00:42:50.829 "dhchap_digests": [ 00:42:50.829 "sha256", 00:42:50.829 "sha384", 00:42:50.829 "sha512" 00:42:50.829 ], 00:42:50.829 "dhchap_dhgroups": [ 00:42:50.829 "null", 00:42:50.829 "ffdhe2048", 00:42:50.829 "ffdhe3072", 00:42:50.829 "ffdhe4096", 00:42:50.829 "ffdhe6144", 00:42:50.829 "ffdhe8192" 00:42:50.829 ] 00:42:50.829 } 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_nvme_attach_controller", 00:42:50.829 "params": { 00:42:50.829 "name": "nvme0", 00:42:50.829 "trtype": "TCP", 00:42:50.829 "adrfam": "IPv4", 00:42:50.829 "traddr": "127.0.0.1", 00:42:50.829 "trsvcid": "4420", 00:42:50.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:50.829 "prchk_reftag": false, 00:42:50.829 "prchk_guard": false, 00:42:50.829 "ctrlr_loss_timeout_sec": 0, 00:42:50.829 "reconnect_delay_sec": 0, 00:42:50.829 "fast_io_fail_timeout_sec": 0, 00:42:50.829 "psk": "key0", 00:42:50.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:50.829 "hdgst": false, 00:42:50.829 "ddgst": false 00:42:50.829 } 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_nvme_set_hotplug", 00:42:50.829 "params": { 00:42:50.829 "period_us": 100000, 00:42:50.829 "enable": false 00:42:50.829 } 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "method": "bdev_wait_for_examine" 00:42:50.829 } 00:42:50.829 ] 00:42:50.829 }, 00:42:50.829 { 00:42:50.829 "subsystem": "nbd", 00:42:50.829 "config": [] 00:42:50.829 } 00:42:50.829 ] 00:42:50.829 }' 00:42:50.829 [2024-07-22 20:50:02.806154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
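The JSON echoed above is exactly what the new instance boots from; the checks that follow then confirm the replay worked using nothing but read-only RPCs filtered through jq. The recurring pattern from keyring/common.sh and file.sh looks like this:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Two keys should have been restored from the config (key0 and key1).
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length

  # Per-key inspection: select the entry by name, then pull one field such as .refcnt.
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt

  # The restored controller should be listed by name as well.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'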
00:42:50.829 [2024-07-22 20:50:02.806268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952104 ] 00:42:51.091 EAL: No free 2048 kB hugepages reported on node 1 00:42:51.091 [2024-07-22 20:50:02.928006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.091 [2024-07-22 20:50:03.063370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:51.352 [2024-07-22 20:50:03.319528] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:51.613 20:50:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:51.613 20:50:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:42:51.613 20:50:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:42:51.613 20:50:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:42:51.613 20:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.874 20:50:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:42:51.874 20:50:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:51.874 20:50:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:51.874 20:50:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.874 20:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.135 20:50:04 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:42:52.135 20:50:04 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:42:52.135 20:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:52.135 20:50:04 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:42:52.396 20:50:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:42:52.396 20:50:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:52.396 20:50:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wbgr8zA83t /tmp/tmp.f4hMsmPQKu 00:42:52.396 20:50:04 keyring_file -- keyring/file.sh@20 -- # killprocess 3952104 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3952104 ']' 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3952104 00:42:52.396 20:50:04 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952104 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:52.396 20:50:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:52.397 20:50:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952104' 00:42:52.397 killing process with pid 3952104 00:42:52.397 20:50:04 keyring_file -- common/autotest_common.sh@967 -- # kill 3952104 00:42:52.397 Received shutdown signal, test time was about 1.000000 seconds 00:42:52.397 00:42:52.397 Latency(us) 00:42:52.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.397 =================================================================================================================== 00:42:52.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:52.397 20:50:04 keyring_file -- common/autotest_common.sh@972 -- # wait 3952104 00:42:52.968 20:50:04 keyring_file -- keyring/file.sh@21 -- # killprocess 3950129 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3950129 ']' 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3950129 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3950129 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3950129' 00:42:52.968 killing process with pid 3950129 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@967 -- # kill 3950129 00:42:52.968 [2024-07-22 20:50:04.782246] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:42:52.968 20:50:04 keyring_file -- common/autotest_common.sh@972 -- # wait 3950129 00:42:54.884 00:42:54.884 real 0m13.403s 00:42:54.884 user 0m28.776s 00:42:54.884 sys 0m2.963s 00:42:54.884 20:50:06 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.884 20:50:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:54.884 ************************************ 00:42:54.884 END TEST keyring_file 00:42:54.884 ************************************ 00:42:54.884 20:50:06 -- common/autotest_common.sh@1142 -- # return 0 00:42:54.884 20:50:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:42:54.884 20:50:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:54.884 20:50:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:54.884 20:50:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:54.884 20:50:06 -- common/autotest_common.sh@10 -- # set +x 00:42:54.884 ************************************ 00:42:54.884 START TEST keyring_linux 00:42:54.884 ************************************ 00:42:54.884 20:50:06 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:54.884 * Looking for test storage... 00:42:54.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:54.884 20:50:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:54.884 20:50:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.884 20:50:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:54.885 20:50:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.885 20:50:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.885 20:50:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.885 20:50:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.885 20:50:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.885 20:50:06 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.885 20:50:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:54.885 20:50:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:54.885 20:50:06 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:54.885 /tmp/:spdk-test:key0 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:54.885 20:50:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:54.885 20:50:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:54.885 /tmp/:spdk-test:key1 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3952900 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3952900 00:42:54.885 20:50:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3952900 ']' 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:54.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:54.885 20:50:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:54.885 [2024-07-22 20:50:06.809605] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
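The keyring_linux suite starts the same way: prep_key builds an interchange-format PSK for each of the two hex keys and drops it into /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. The heavy lifting is done by format_interchange_psk from nvmf/common.sh (the inline python seen above), which wraps the key material in the NVMeTLSkey-1:00:...: envelope, the 00 digest field appearing to mean that no HMAC is applied. Roughly, per key (the redirect into the target path is an assumption; the log only shows the calls):

  # prep_key <name> <hex key> <digest> <path>, as test/keyring/common.sh drives it
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0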
00:42:54.885 [2024-07-22 20:50:06.809722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952900 ] 00:42:54.885 EAL: No free 2048 kB hugepages reported on node 1 00:42:55.146 [2024-07-22 20:50:06.920858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.146 [2024-07-22 20:50:07.096172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.717 20:50:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:55.717 20:50:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:55.717 20:50:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:55.717 20:50:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:55.717 20:50:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:55.717 [2024-07-22 20:50:07.681972] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:55.717 null0 00:42:55.717 [2024-07-22 20:50:07.714014] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:55.717 [2024-07-22 20:50:07.714450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:55.717 20:50:07 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:55.717 20:50:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:55.717 701483278 00:42:55.978 20:50:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:55.978 254737898 00:42:55.978 20:50:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3953012 00:42:55.978 20:50:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3953012 /var/tmp/bperf.sock 00:42:55.978 20:50:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3953012 ']' 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:55.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:55.978 20:50:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:55.978 [2024-07-22 20:50:07.816606] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
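Unlike keyring_file, the Linux-keyring variant never hands SPDK a file: the two interchange strings are loaded into the kernel session keyring with keyctl, and the serial numbers it returns (701483278 and 254737898 in this run) are what the later checks and the cleanup search for. The bdevperf starting here was launched with --wait-for-rpc precisely so the backend can be enabled before the framework comes up, which is what the next RPCs do. A hedged sketch (reading the payload back from the prepared file is an assumption; the log only shows the expanded string):

  # Load both PSKs into the session keyring (@s); keyctl prints each new key's serial.
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  # With bdevperf held at --wait-for-rpc, enable the linux keyring backend first,
  # then let initialization proceed, then attach by kernel key name.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0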
00:42:55.978 [2024-07-22 20:50:07.816713] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953012 ] 00:42:55.978 EAL: No free 2048 kB hugepages reported on node 1 00:42:55.978 [2024-07-22 20:50:07.937269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.239 [2024-07-22 20:50:08.072437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:56.887 20:50:08 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:56.887 20:50:08 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:56.887 20:50:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:56.888 20:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:56.888 20:50:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:56.888 20:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:57.148 20:50:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:57.148 20:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:57.148 [2024-07-22 20:50:09.136708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:57.409 nvme0n1 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:57.409 20:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:57.409 20:50:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:57.409 20:50:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.409 20:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.409 20:50:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@25 -- # sn=701483278 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@26 -- # [[ 701483278 == \7\0\1\4\8\3\2\7\8 ]] 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 701483278 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:57.670 20:50:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:57.670 Running I/O for 1 seconds... 00:42:59.055 00:42:59.055 Latency(us) 00:42:59.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.055 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:59.055 nvme0n1 : 1.01 8666.62 33.85 0.00 0.00 14656.78 11468.80 23811.41 00:42:59.055 =================================================================================================================== 00:42:59.055 Total : 8666.62 33.85 0.00 0.00 14656.78 11468.80 23811.41 00:42:59.055 0 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:59.055 20:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:59.055 20:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.055 20:50:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:59.055 20:50:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:59.055 20:50:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:59.055 20:50:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:59.055 20:50:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:59.055 20:50:11 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:59.055 20:50:11 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:59.317 [2024-07-22 20:50:11.158678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:59.317 [2024-07-22 20:50:11.158976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d180 (107): Transport endpoint is not connected 00:42:59.317 [2024-07-22 20:50:11.159962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500038d180 (9): Bad file descriptor 00:42:59.317 [2024-07-22 20:50:11.160960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:59.317 [2024-07-22 20:50:11.160976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:59.317 [2024-07-22 20:50:11.160984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:59.317 request: 00:42:59.317 { 00:42:59.317 "name": "nvme0", 00:42:59.317 "trtype": "tcp", 00:42:59.317 "traddr": "127.0.0.1", 00:42:59.317 "adrfam": "ipv4", 00:42:59.317 "trsvcid": "4420", 00:42:59.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:59.317 "prchk_reftag": false, 00:42:59.317 "prchk_guard": false, 00:42:59.317 "hdgst": false, 00:42:59.317 "ddgst": false, 00:42:59.317 "psk": ":spdk-test:key1", 00:42:59.317 "method": "bdev_nvme_attach_controller", 00:42:59.317 "req_id": 1 00:42:59.317 } 00:42:59.317 Got JSON-RPC error response 00:42:59.317 response: 00:42:59.317 { 00:42:59.317 "code": -5, 00:42:59.317 "message": "Input/output error" 00:42:59.317 } 00:42:59.317 20:50:11 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:42:59.317 20:50:11 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:59.317 20:50:11 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:59.317 20:50:11 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=701483278 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 701483278 00:42:59.317 1 links removed 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:59.317 20:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=254737898 
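After the short randread run (about 8.7k IOPS at 4k in the one-second window above), the controller is detached and both keys are torn down. check_keys and the cleanup share the same primitive: resolve the key name to its kernel serial with keyctl search, then either compare that serial and payload against what SPDK reports or unlink it. The pattern, per key:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # check_keys: the serial found in the session keyring must match the .sn that
  # SPDK reports for the same key, and its payload must be the interchange string.
  sn=$(keyctl search @s user :spdk-test:key0)
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")' | jq -r .sn
  keyctl print "$sn"

  # cleanup: unlink the key by that same serial ("1 links removed" per key above).
  keyctl unlink "$sn"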
00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 254737898 00:42:59.318 1 links removed 00:42:59.318 20:50:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3953012 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3953012 ']' 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3953012 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953012 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953012' 00:42:59.318 killing process with pid 3953012 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 3953012 00:42:59.318 Received shutdown signal, test time was about 1.000000 seconds 00:42:59.318 00:42:59.318 Latency(us) 00:42:59.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.318 =================================================================================================================== 00:42:59.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:59.318 20:50:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 3953012 00:42:59.889 20:50:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3952900 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3952900 ']' 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3952900 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952900 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952900' 00:42:59.889 killing process with pid 3952900 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 3952900 00:42:59.889 20:50:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 3952900 00:43:01.802 00:43:01.802 real 0m6.953s 00:43:01.802 user 0m10.818s 00:43:01.802 sys 0m1.613s 00:43:01.802 20:50:13 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:01.802 20:50:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:01.802 ************************************ 00:43:01.802 END TEST keyring_linux 00:43:01.802 ************************************ 00:43:01.802 20:50:13 -- common/autotest_common.sh@1142 -- # return 0 00:43:01.802 20:50:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 
-- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:43:01.802 20:50:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:43:01.802 20:50:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:43:01.802 20:50:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:43:01.802 20:50:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:43:01.802 20:50:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:01.802 20:50:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:01.802 20:50:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:01.802 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:43:01.802 20:50:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:01.802 20:50:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:01.802 20:50:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:01.802 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:43:08.387 INFO: APP EXITING 00:43:08.387 INFO: killing all VMs 00:43:08.649 INFO: killing vhost app 00:43:08.649 WARN: no vhost pid file found 00:43:08.649 INFO: EXIT DONE 00:43:11.947 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:65:00.0 (144d a80a): Already using the nvme driver 00:43:11.947 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:43:11.947 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:43:15.245 Cleaning 00:43:15.245 Removing: /var/run/dpdk/spdk0/config 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:15.245 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:15.245 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:15.245 Removing: /var/run/dpdk/spdk1/config 00:43:15.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:15.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:15.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:15.245 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:15.245 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:15.506 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:15.506 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:15.506 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:15.506 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:15.506 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:15.506 Removing: /var/run/dpdk/spdk1/mp_socket 00:43:15.506 Removing: /var/run/dpdk/spdk2/config 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:15.506 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:15.506 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:15.506 Removing: /var/run/dpdk/spdk3/config 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:15.506 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:15.506 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:15.506 Removing: /var/run/dpdk/spdk4/config 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:15.506 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:15.506 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:15.506 Removing: /dev/shm/bdev_svc_trace.1 00:43:15.506 Removing: /dev/shm/nvmf_trace.0 00:43:15.506 Removing: /dev/shm/spdk_tgt_trace.pid3377725 00:43:15.506 Removing: /var/run/dpdk/spdk0 00:43:15.506 Removing: /var/run/dpdk/spdk1 00:43:15.506 Removing: /var/run/dpdk/spdk2 00:43:15.506 Removing: /var/run/dpdk/spdk3 00:43:15.506 Removing: /var/run/dpdk/spdk4 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3375077 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3377725 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3378582 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3379952 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3380632 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3382035 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3382368 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3382901 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3384241 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3385075 00:43:15.506 Removing: 
/var/run/dpdk/spdk_pid3385804 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3386436 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3386983 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3387672 00:43:15.506 Removing: /var/run/dpdk/spdk_pid3388037 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3388392 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3388783 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3390159 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3393751 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3394452 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3395154 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3395210 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3396872 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3396889 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3398441 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3398601 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3399295 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3399376 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3400111 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3400211 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3401240 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3401818 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3402497 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3403163 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3403508 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3403882 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3404271 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3404634 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3405200 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3405669 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3406029 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3406524 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3407071 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3407433 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3407831 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3408469 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3408834 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3409207 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3409785 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3410234 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3410597 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3411126 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3411634 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3412011 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3412523 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3413057 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3413453 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3414202 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3418934 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3424078 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3436131 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3436873 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3442197 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3442582 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3447940 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3455712 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3458979 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3471813 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3482829 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3485016 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3486226 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3508051 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3512944 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3611387 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3618006 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3625092 00:43:15.769 Removing: /var/run/dpdk/spdk_pid3635937 00:43:15.769 Removing: 
/var/run/dpdk/spdk_pid3668305 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3673702 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3675708 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3678049 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3678393 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3678672 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3678937 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3679802 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3682088 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3683361 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3684279 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3687545 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3688443 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3689479 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3694684 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3701360 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3707103 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3752638 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3757466 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3764996 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3766828 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3769004 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3774565 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3780084 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3789446 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3789455 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3794533 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3794839 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3795177 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3795641 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3795798 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3797189 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3799157 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3801056 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3802988 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3804965 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3806916 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3814324 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3815082 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3816275 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3817781 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3825127 00:43:16.032 Removing: /var/run/dpdk/spdk_pid3828365 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3834909 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3841472 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3851526 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3860339 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3860345 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3883297 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3884107 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3884932 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3885800 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3887036 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3887895 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3888576 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3889350 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3894631 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3894988 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3902336 00:43:16.033 Removing: /var/run/dpdk/spdk_pid3902552 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3905236 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3912673 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3912680 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3918568 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3921171 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3924143 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3925644 00:43:16.297 Removing: 
/var/run/dpdk/spdk_pid3928162 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3929683 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3939795 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3940321 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3940964 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3944148 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3944780 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3945453 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3950129 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3950329 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3952104 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3952900 00:43:16.297 Removing: /var/run/dpdk/spdk_pid3953012 00:43:16.297 Clean 00:43:16.297 20:50:28 -- common/autotest_common.sh@1451 -- # return 0 00:43:16.297 20:50:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:43:16.297 20:50:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:16.297 20:50:28 -- common/autotest_common.sh@10 -- # set +x 00:43:16.297 20:50:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:43:16.297 20:50:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:16.297 20:50:28 -- common/autotest_common.sh@10 -- # set +x 00:43:16.297 20:50:28 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:16.297 20:50:28 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:16.297 20:50:28 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:16.297 20:50:28 -- spdk/autotest.sh@391 -- # hash lcov 00:43:16.297 20:50:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:43:16.558 20:50:28 -- spdk/autotest.sh@393 -- # hostname 00:43:16.558 20:50:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:16.558 geninfo: WARNING: invalid characters removed from testname! 
00:43:38.589 20:50:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:41.135 20:50:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:42.519 20:50:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:43.903 20:50:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:45.288 20:50:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:47.198 20:50:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:48.582 20:51:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:48.582 20:51:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:48.582 20:51:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:48.582 20:51:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:48.582 20:51:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:48.582 20:51:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.582 20:51:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.582 20:51:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.582 20:51:00 -- paths/export.sh@5 -- $ export PATH 00:43:48.582 20:51:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.582 20:51:00 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:48.582 20:51:00 -- common/autobuild_common.sh@447 -- $ date +%s 00:43:48.582 20:51:00 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721674260.XXXXXX 00:43:48.582 20:51:00 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721674260.9WqvTJ 00:43:48.582 20:51:00 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:43:48.582 20:51:00 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:43:48.582 20:51:00 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:43:48.582 20:51:00 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:48.582 20:51:00 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:48.582 20:51:00 -- common/autobuild_common.sh@463 -- $ get_config_params 00:43:48.582 20:51:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:43:48.582 20:51:00 -- common/autotest_common.sh@10 -- $ set +x 00:43:48.582 20:51:00 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:43:48.582 20:51:00 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:43:48.582 20:51:00 -- pm/common@17 -- $ local monitor 00:43:48.582 20:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:48.582 20:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:48.582 20:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:48.582 20:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:48.582 20:51:00 -- pm/common@21 -- $ date +%s 00:43:48.582 20:51:00 -- pm/common@25 -- $ sleep 1 00:43:48.582 
20:51:00 -- pm/common@21 -- $ date +%s 00:43:48.582 20:51:00 -- pm/common@21 -- $ date +%s 00:43:48.582 20:51:00 -- pm/common@21 -- $ date +%s 00:43:48.582 20:51:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721674260 00:43:48.582 20:51:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721674260 00:43:48.582 20:51:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721674260 00:43:48.582 20:51:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721674260 00:43:48.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721674260_collect-vmstat.pm.log 00:43:48.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721674260_collect-cpu-load.pm.log 00:43:48.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721674260_collect-cpu-temp.pm.log 00:43:48.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721674260_collect-bmc-pm.bmc.pm.log 00:43:49.524 20:51:01 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:43:49.524 20:51:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:43:49.524 20:51:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:49.524 20:51:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:43:49.524 20:51:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:43:49.524 20:51:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:43:49.524 20:51:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:43:49.524 20:51:01 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:49.524 20:51:01 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:43:49.524 20:51:01 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:49.524 20:51:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:43:49.524 20:51:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:49.524 20:51:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:49.524 20:51:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:49.524 20:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:49.524 20:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:49.524 20:51:01 -- pm/common@44 -- $ pid=3966706 00:43:49.524 20:51:01 -- pm/common@50 -- $ kill -TERM 3966706 00:43:49.524 20:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:49.524 20:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:49.524 20:51:01 -- 
pm/common@44 -- $ pid=3966707 00:43:49.524 20:51:01 -- pm/common@50 -- $ kill -TERM 3966707 00:43:49.524 20:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:49.524 20:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:49.524 20:51:01 -- pm/common@44 -- $ pid=3966709 00:43:49.524 20:51:01 -- pm/common@50 -- $ kill -TERM 3966709 00:43:49.524 20:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:49.524 20:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:49.524 20:51:01 -- pm/common@44 -- $ pid=3966726 00:43:49.524 20:51:01 -- pm/common@50 -- $ sudo -E kill -TERM 3966726 00:43:49.524 + [[ -n 3255247 ]] 00:43:49.524 + sudo kill 3255247 00:43:49.535 [Pipeline] } 00:43:49.553 [Pipeline] // stage 00:43:49.559 [Pipeline] } 00:43:49.575 [Pipeline] // timeout 00:43:49.580 [Pipeline] } 00:43:49.597 [Pipeline] // catchError 00:43:49.602 [Pipeline] } 00:43:49.616 [Pipeline] // wrap 00:43:49.620 [Pipeline] } 00:43:49.634 [Pipeline] // catchError 00:43:49.642 [Pipeline] stage 00:43:49.644 [Pipeline] { (Epilogue) 00:43:49.658 [Pipeline] catchError 00:43:49.659 [Pipeline] { 00:43:49.673 [Pipeline] echo 00:43:49.674 Cleanup processes 00:43:49.679 [Pipeline] sh 00:43:49.972 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:49.972 3966811 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:49.972 3967305 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:49.986 [Pipeline] sh 00:43:50.275 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:50.275 ++ grep -v 'sudo pgrep' 00:43:50.275 ++ awk '{print $1}' 00:43:50.275 + sudo kill -9 3966811 00:43:50.288 [Pipeline] sh 00:43:50.577 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:02.834 [Pipeline] sh 00:44:03.178 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:03.178 Artifacts sizes are good 00:44:03.194 [Pipeline] archiveArtifacts 00:44:03.202 Archiving artifacts 00:44:03.454 [Pipeline] sh 00:44:03.739 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:03.754 [Pipeline] cleanWs 00:44:03.766 [WS-CLEANUP] Deleting project workspace... 00:44:03.766 [WS-CLEANUP] Deferred wipeout is used... 00:44:03.774 [WS-CLEANUP] done 00:44:03.776 [Pipeline] } 00:44:03.795 [Pipeline] // catchError 00:44:03.806 [Pipeline] sh 00:44:04.093 + logger -p user.info -t JENKINS-CI 00:44:04.102 [Pipeline] } 00:44:04.116 [Pipeline] // stage 00:44:04.127 [Pipeline] } 00:44:04.142 [Pipeline] // node 00:44:04.146 [Pipeline] End of Pipeline 00:44:04.176 Finished: SUCCESS